The AWS CLI is great, but building your own S3 tool in Go teaches you the SDK patterns you’ll reuse in every AWS project. We’ll start with the absolute minimum, listing buckets, and work our way up to a useful tool with uploads, downloads, and presigned URLs.
Each step adds exactly one capability. No jumping ahead.
What We’re Building
A command-line tool called s3tool that can:
- List buckets
- List objects in a bucket
- Upload files
- Download files
- Generate presigned URLs
We’ll build each feature one at a time, hitting real problems along the way.
Prerequisites
- Go 1.21+ installed
- AWS CLI configured (aws s3 ls should work)
- At least one S3 bucket to test with
Step 1: List Buckets (The Minimum)
What: Connect to AWS and list all S3 buckets.
Why: This is the “hello world” of the AWS SDK. If this works, your credentials, region, and SDK are all set up correctly.
mkdir s3tool && cd s3tool
go mod init s3tool
go get github.com/aws/aws-sdk-go-v2/config
go get github.com/aws/aws-sdk-go-v2/service/s3
main.go
package main
import (
"context"
"fmt"
"log"
"github.com/aws/aws-sdk-go-v2/config"
"github.com/aws/aws-sdk-go-v2/service/s3"
)
func main() {
cfg, err := config.LoadDefaultConfig(context.TODO())
if err != nil {
log.Fatalf("unable to load SDK config: %v", err)
}
client := s3.NewFromConfig(cfg)
result, err := client.ListBuckets(context.TODO(), &s3.ListBucketsInput{})
if err != nil {
log.Fatalf("unable to list buckets: %v", err)
}
fmt.Printf("Found %d buckets:\n", len(result.Buckets))
for _, bucket := range result.Buckets {
fmt.Printf(" %s (created: %s)\n", *bucket.Name, bucket.CreationDate.Format("2006-01-02"))
}
}
config.LoadDefaultConfig reads your AWS credentials from the same places the AWS CLI does: environment variables, ~/.aws/credentials, or IAM roles. The context.TODO() is a placeholder context; in a real application you’d thread a proper, cancellable context through instead.
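If you need to pin a region or use a named profile instead of the default lookup chain, LoadDefaultConfig accepts option functions from the same config package. A small sketch (the region and profile names here are placeholders):

// Optional: override the region and profile instead of relying on defaults.
cfg, err := config.LoadDefaultConfig(context.TODO(),
    config.WithRegion("us-east-1"),
    config.WithSharedConfigProfile("my-profile"),
)
if err != nil {
    log.Fatalf("unable to load SDK config: %v", err)
}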
Notice *bucket.Name. The SDK returns string pointers, not strings. This is because many AWS API fields are optional, and Go uses nil pointers to represent “not set.” You’ll see this pattern everywhere in the AWS SDK.
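If dereferencing pointers gets tedious, the SDK’s aws helper package has nil-safe converters. A small sketch, assuming you also import github.com/aws/aws-sdk-go-v2/aws:

// aws.ToString returns "" when the pointer is nil instead of panicking,
// so it's a safer way to read optional fields like bucket.Name.
fmt.Println(aws.ToString(bucket.Name))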
go run main.go
Expected output:
Found 3 buckets:
my-app-assets (created: 2024-03-15)
my-logs-bucket (created: 2023-11-20)
my-terraform-state (created: 2024-01-08)
If you get a credentials error, run aws sts get-caller-identity to verify your AWS CLI setup.
Step 2: List Objects in a Bucket
What: Accept a bucket name as an argument and list its contents.
Why: Listing buckets proves the SDK works. Listing objects proves we can interact with a specific bucket and introduces our first command-line argument.
main.go
package main
import (
"context"
"fmt"
"log"
"os"
"github.com/aws/aws-sdk-go-v2/config"
"github.com/aws/aws-sdk-go-v2/service/s3"
)
func main() {
if len(os.Args) < 2 {
fmt.Println("Usage: s3tool <command> [args]")
fmt.Println("Commands:")
fmt.Println(" buckets List all buckets")
fmt.Println(" ls <bucket> List objects in a bucket")
os.Exit(1)
}
cfg, err := config.LoadDefaultConfig(context.TODO())
if err != nil {
log.Fatalf("unable to load SDK config: %v", err)
}
client := s3.NewFromConfig(cfg)
command := os.Args[1]
switch command {
case "buckets":
listBuckets(client)
case "ls":
if len(os.Args) < 3 {
log.Fatal("usage: s3tool ls <bucket-name>")
}
listObjects(client, os.Args[2])
default:
log.Fatalf("unknown command: %s", command)
}
}
func listBuckets(client *s3.Client) {
result, err := client.ListBuckets(context.TODO(), &s3.ListBucketsInput{})
if err != nil {
log.Fatalf("unable to list buckets: %v", err)
}
fmt.Printf("Found %d buckets:\n", len(result.Buckets))
for _, bucket := range result.Buckets {
fmt.Printf(" %s (created: %s)\n", *bucket.Name, bucket.CreationDate.Format("2006-01-02"))
}
}
func listObjects(client *s3.Client, bucket string) {
result, err := client.ListObjectsV2(context.TODO(), &s3.ListObjectsV2Input{
Bucket: &bucket,
})
if err != nil {
log.Fatalf("unable to list objects in %s: %v", bucket, err)
}
fmt.Printf("Objects in %s (%d):\n", bucket, len(result.Contents))
for _, obj := range result.Contents {
size := *obj.Size
unit := "B"
if size > 1024*1024 {
size = size / (1024 * 1024)
unit = "MB"
} else if size > 1024 {
size = size / 1024
unit = "KB"
}
fmt.Printf(" %-40s %5d %s %s\n", *obj.Key, size, unit, obj.LastModified.Format("2006-01-02 15:04"))
}
}
We restructured main() into a command router using a switch statement. Each command gets its own function that takes the S3 client. The listObjects function uses ListObjectsV2 (always use V2; V1 is legacy) and formats the output with human-readable sizes.
Notice Bucket: &bucket. We pass a pointer because the SDK expects *string. This is the most annoying part of the AWS Go SDK, but you’ll get used to it.
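The aws helper package covers the input side too. A sketch of the same call built with aws.String instead of taking the address of a local variable (again assuming the github.com/aws/aws-sdk-go-v2/aws import):

// aws.String wraps a literal in a *string, handy when there's no variable to take the address of.
result, err := client.ListObjectsV2(context.TODO(), &s3.ListObjectsV2Input{
    Bucket: aws.String("my-app-assets"),
})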
go run main.go buckets
go run main.go ls my-app-assets
Expected output:
Objects in my-app-assets (5):
images/logo.png 45 KB 2024-06-15 09:30
images/hero.jpg 312 KB 2024-06-15 09:31
css/style.css 12 KB 2024-07-01 14:22
js/app.js 89 KB 2024-07-01 14:22
index.html 3 KB 2024-07-10 11:45
But there’s a problem. Try this on a bucket with more than 1,000 objects. You’ll only get the first 1,000. ListObjectsV2 returns at most 1,000 keys per request, and we’re only reading the first page. Let’s fix that.
Step 3: Handle Pagination
What: Fetch all objects, not just the first 1,000.
Why: The 1,000-object limit is a real production bug waiting to happen. Your tool works perfectly in dev (50 objects) and silently returns incomplete results in prod (50,000 objects). This is exactly the kind of mistake that ships because the happy path worked.
Update the listObjects function:
main.go (replace the listObjects function):
func listObjects(client *s3.Client, bucket string) {
var allObjects []types.Object
paginator := s3.NewListObjectsV2Paginator(client, &s3.ListObjectsV2Input{
Bucket: &bucket,
})
for paginator.HasMorePages() {
page, err := paginator.NextPage(context.TODO())
if err != nil {
log.Fatalf("unable to list objects in %s: %v", bucket, err)
}
allObjects = append(allObjects, page.Contents...)
}
fmt.Printf("Objects in %s (%d):\n", bucket, len(allObjects))
for _, obj := range allObjects {
size := *obj.Size
unit := "B"
if size > 1024*1024 {
size = size / (1024 * 1024)
unit = "MB"
} else if size > 1024 {
size = size / 1024
unit = "KB"
}
fmt.Printf(" %-40s %5d %s %s\n", *obj.Key, size, unit, obj.LastModified.Format("2006-01-02 15:04"))
}
}
Add the types import at the top:
import (
// ... existing imports ...
"github.com/aws/aws-sdk-go-v2/service/s3/types"
)
The types package is part of the s3 module you already fetched, so there’s no new go get to run.
The SDK provides a Paginator that handles the continuation token for you. HasMorePages() returns true until all pages are fetched. This pattern is the same across all AWS services (DynamoDB, CloudWatch, EC2), so learn it once and use it everywhere.
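Under the hood, the paginator just threads a continuation token between calls. Here’s a rough hand-written equivalent, to show what it’s automating (using the ContinuationToken and NextContinuationToken fields):

// Manual pagination: what NewListObjectsV2Paginator does for you.
var allObjects []types.Object
var token *string
for {
    page, err := client.ListObjectsV2(context.TODO(), &s3.ListObjectsV2Input{
        Bucket:            &bucket,
        ContinuationToken: token,
    })
    if err != nil {
        log.Fatalf("unable to list objects in %s: %v", bucket, err)
    }
    allObjects = append(allObjects, page.Contents...)
    if page.NextContinuationToken == nil {
        break // S3 only sets the token when there are more pages
    }
    token = page.NextContinuationToken
}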
go run main.go ls my-big-bucket
Now it returns all objects, no matter how many there are.
Step 4: Upload Files
What: Upload a local file to S3.
Why: Reading from S3 is only half the story. Uploading is where you hit new problems: content types, file handling, and the “I forgot to close the file” bug.
Add the upload command to your switch statement in main():
case "upload":
if len(os.Args) < 4 {
log.Fatal("usage: s3tool upload <file> <bucket/key>")
}
uploadFile(client, os.Args[2], os.Args[3])
Add the function:
func uploadFile(client *s3.Client, filePath string, destination string) {
// Parse bucket/key from destination
parts := strings.SplitN(destination, "/", 2)
if len(parts) != 2 {
log.Fatal("destination must be in format: bucket/key")
}
bucket, key := parts[0], parts[1]
// Open the file
file, err := os.Open(filePath)
if err != nil {
log.Fatalf("unable to open %s: %v", filePath, err)
}
defer file.Close()
// Get file info for size reporting
stat, _ := file.Stat()
// Upload
_, err = client.PutObject(context.TODO(), &s3.PutObjectInput{
Bucket: &bucket,
Key: &key,
Body: file,
})
if err != nil {
log.Fatalf("unable to upload %s: %v", filePath, err)
}
fmt.Printf("uploaded %s to s3://%s/%s (%d bytes)\n", filePath, bucket, key, stat.Size())
}
Add "strings" to your imports.
PutObject accepts an io.Reader for the body. os.File implements this interface, so we pass the file directly. No need to read the entire file into memory first. The defer file.Close() ensures the file handle is released even if the upload fails.
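One thing this version skips is the content type: without one, S3 stores the object with a generic default, which matters if the file will later be served to a browser. A small optional tweak, guessing the type from the file extension with the standard library mime and path/filepath packages (a sketch, not part of the tool as written):

// Guess a Content-Type from the extension, e.g. ".png" -> "image/png".
contentType := mime.TypeByExtension(filepath.Ext(filePath))
input := &s3.PutObjectInput{
    Bucket: &bucket,
    Key:    &key,
    Body:   file,
}
if contentType != "" {
    input.ContentType = &contentType
}
_, err = client.PutObject(context.TODO(), input)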
echo "test content" > test.txt
go run main.go upload test.txt my-app-assets/test.txt
Expected output:
uploaded test.txt to s3://my-app-assets/test.txt (13 bytes)
This works for small files. For files over 5GB, you’d need multipart upload, but that’s a separate topic. Most real-world uploads are under 5GB.
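If you do hit that limit, the SDK’s higher-level transfer manager handles multipart uploads for you. A minimal sketch, assuming you fetch the github.com/aws/aws-sdk-go-v2/feature/s3/manager module:

// manager.Uploader splits large bodies into parts and uploads them
// concurrently; it takes the same PutObjectInput fields.
uploader := manager.NewUploader(client)
_, err = uploader.Upload(context.TODO(), &s3.PutObjectInput{
    Bucket: &bucket,
    Key:    &key,
    Body:   file,
})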
Step 5: Download Files
What: Download an S3 object to a local file.
Why: Completes the read/write cycle. Also introduces a common mistake: downloading to a file that already exists.
Add the download command:
case "download":
if len(os.Args) < 4 {
log.Fatal("usage: s3tool download <bucket/key> <file>")
}
downloadFile(client, os.Args[2], os.Args[3])
Add the function:
func downloadFile(client *s3.Client, source string, filePath string) {
// Parse bucket/key
parts := strings.SplitN(source, "/", 2)
if len(parts) != 2 {
log.Fatal("source must be in format: bucket/key")
}
bucket, key := parts[0], parts[1]
// Check if file already exists
if _, err := os.Stat(filePath); err == nil {
log.Fatalf("file %s already exists (use a different name or delete it first)", filePath)
}
// Download
result, err := client.GetObject(context.TODO(), &s3.GetObjectInput{
Bucket: &bucket,
Key: &key,
})
if err != nil {
log.Fatalf("unable to download s3://%s/%s: %v", bucket, key, err)
}
defer result.Body.Close()
// Write to file
outFile, err := os.Create(filePath)
if err != nil {
log.Fatalf("unable to create %s: %v", filePath, err)
}
defer outFile.Close()
written, err := io.Copy(outFile, result.Body)
if err != nil {
log.Fatalf("unable to write %s: %v", filePath, err)
}
fmt.Printf("downloaded s3://%s/%s to %s (%d bytes)\n", bucket, key, filePath, written)
}
Add "io" to your imports.
We check if the local file exists first. Silently overwriting files is a common CLI tool mistake. GetObject returns a response with a Body that implements io.ReadCloser. We use io.Copy to stream it to disk without loading the entire file into memory.
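When the key doesn’t exist, GetObject fails with a fairly generic-looking error. If you want a friendlier message, the SDK’s typed errors can be checked with errors.As; a sketch using the types package we already import (plus the standard errors package):

// Detect a missing object specifically before falling back to the generic error.
var notFound *types.NoSuchKey
if errors.As(err, &notFound) {
    log.Fatalf("s3://%s/%s does not exist", bucket, key)
}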
go run main.go download my-app-assets/test.txt downloaded.txt
Expected output:
downloaded s3://my-app-assets/test.txt to downloaded.txt (13 bytes)
Try downloading to the same file again:
file downloaded.txt already exists (use a different name or delete it first)
Good. It protects you from accidental overwrites.
Step 6: Generate Presigned URLs
What: Generate a temporary download URL that anyone can use without AWS credentials.
Why: This is one of the most practical S3 features. Share a file with someone who doesn’t have AWS access, set an expiry, and the URL handles authentication automatically.
No new dependency is needed for this one: presigning ships with the s3 package itself, via s3.NewPresignClient.
Add the presign command:
case "presign":
if len(os.Args) < 3 {
log.Fatal("usage: s3tool presign <bucket/key>")
}
presignURL(client, os.Args[2])
Add the function:
func presignURL(client *s3.Client, source string) {
parts := strings.SplitN(source, "/", 2)
if len(parts) != 2 {
log.Fatal("source must be in format: bucket/key")
}
bucket, key := parts[0], parts[1]
presigner := s3.NewPresignClient(client)
result, err := presigner.PresignGetObject(context.TODO(), &s3.GetObjectInput{
Bucket: &bucket,
Key: &key,
}, s3.WithPresignExpires(15*time.Minute))
if err != nil {
log.Fatalf("unable to presign URL: %v", err)
}
fmt.Printf("Presigned URL (expires in 15 minutes):\n%s\n", result.URL)
}
Add "time" to your imports.
NewPresignClient wraps your existing S3 client and adds presigning capabilities. WithPresignExpires sets the URL lifetime. 15 minutes is a reasonable default. The URL includes a signature that grants temporary read access to that specific object.
go run main.go presign my-app-assets/test.txt
Expected output:
Presigned URL (expires in 15 minutes):
https://my-app-assets.s3.us-east-1.amazonaws.com/test.txt?X-Amz-Algorithm=AWS4-HMAC...
You can open that URL in a browser or share it with anyone. After 15 minutes, it stops working.
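Presigning works in the other direction too: PresignPutObject produces a URL that the holder can HTTP PUT a file to, with the same expiry option. A sketch:

// Presigned upload URL: lets someone upload to this exact key without credentials.
uploadReq, err := presigner.PresignPutObject(context.TODO(), &s3.PutObjectInput{
    Bucket: &bucket,
    Key:    &key,
}, s3.WithPresignExpires(15*time.Minute))
if err != nil {
    log.Fatalf("unable to presign upload URL: %v", err)
}
fmt.Println(uploadReq.URL)

The recipient can then upload with something like curl -T local.txt "<url>".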
Build the Binary
Now let’s compile it into a proper CLI tool:
go build -o s3tool main.go
./s3tool buckets
./s3tool ls my-bucket
./s3tool upload file.txt my-bucket/file.txt
./s3tool download my-bucket/file.txt local-file.txt
./s3tool presign my-bucket/file.txt
What We Built
Starting from a 20-line bucket lister, we incrementally built a practical S3 tool:
- Bucket listing: verify SDK and credentials work
- Object listing: interact with a specific bucket
- Pagination: handle the 1,000-object limit that silently breaks in production
- Upload: stream files to S3 without loading into memory
- Download: stream from S3 with overwrite protection
- Presigned URLs: temporary access without AWS credentials
Each step introduced one new SDK pattern. These same patterns (paginators, PutObject/GetObject, presigning) apply to every Go + AWS project you’ll build.
Next Steps
- Add a --prefix flag to filter objects in ls
- Add an --expires flag to customize presign duration
- Add concurrent uploads for directories using goroutines
- Wrap this into a Lambda function for serverless file processing
- Provision the S3 bucket itself with Terraform instead of creating it manually in the console
Check out Build and Deploy a Go Lambda Function to see how Go and Lambda work together.
Cheat Sheet
Copy-paste reference for Go + S3.
Setup:
cfg, _ := config.LoadDefaultConfig(context.TODO())
client := s3.NewFromConfig(cfg)
List buckets:
result, _ := client.ListBuckets(ctx, &s3.ListBucketsInput{})
for _, b := range result.Buckets { fmt.Println(*b.Name) }
List objects (with pagination):
paginator := s3.NewListObjectsV2Paginator(client, &s3.ListObjectsV2Input{Bucket: &bucket})
for paginator.HasMorePages() {
page, _ := paginator.NextPage(ctx)
for _, obj := range page.Contents { fmt.Println(*obj.Key) }
}
Upload a file:
file, _ := os.Open("local.txt")
defer file.Close()
client.PutObject(ctx, &s3.PutObjectInput{Bucket: &bucket, Key: &key, Body: file})
Download a file:
result, _ := client.GetObject(ctx, &s3.GetObjectInput{Bucket: &bucket, Key: &key})
defer result.Body.Close()
outFile, _ := os.Create("local.txt")
defer outFile.Close()
io.Copy(outFile, result.Body)
Presigned URL (15 min):
presigner := s3.NewPresignClient(client)
req, _ := presigner.PresignGetObject(ctx, &s3.GetObjectInput{
Bucket: &bucket, Key: &key,
}, s3.WithPresignExpires(15*time.Minute))
fmt.Println(req.URL)
Key rules to remember:
- Always use ListObjectsV2 (not V1, which is legacy)
- ListObjectsV2 returns at most 1,000 objects per page, so use the paginator for large buckets
- The SDK uses *string pointers everywhere. Pass &myString or use aws.String("value")
- PutObject accepts an io.Reader. Pass the file directly; don't read it into memory
- GetObject returns an io.ReadCloser. Always defer result.Body.Close()
- Presigned URLs expire. The default is 15 minutes; the maximum is 7 days