/user/kayd @ devops :~$ cat go-sqs-message-queue.md

Go + SQS: Build a Message Queue Processor

Karandeep Singh • 15 minutes

Summary

Build an SQS message processor in Go step by step. Send messages, hit the visibility timeout trap, add batch operations, and build a production-ready polling loop.

SQS (Simple Queue Service) is the glue between microservices. One service drops a message on the queue, another picks it up and processes it. It sounds simple, but there are real traps: messages that reappear after you’ve already read them, messages you think are deleted but aren’t, and the “I’m only getting one message at a time” performance problem.

We’ll hit all of these on purpose so you understand the fixes.

What We’re Building

A job processor. One program sends deployment notifications to a queue, another reads and processes them. Think of it as the notification system behind a CI/CD pipeline.

The journey:

  1. Create a queue and send one message
  2. Receive the message, and watch it come back (the visibility timeout trap)
  3. Receive and delete properly
  4. Send messages in batches (10x faster)
  5. Build a polling loop that runs continuously
  6. Add a dead letter queue for messages that keep failing

Prerequisites

  • Go 1.21+ installed
  • AWS CLI configured (aws sts get-caller-identity should work)
  • An AWS account with SQS permissions

Step 1: Create a Queue and Send a Message

What: Create an SQS queue and put one message on it.

Why: This is the “hello world” of SQS. If this works, your credentials and SDK are set up right.

Create your project:

mkdir go-sqs-processor && cd go-sqs-processor
go mod init go-sqs-processor
go get github.com/aws/aws-sdk-go-v2/config
go get github.com/aws/aws-sdk-go-v2/service/sqs

main.go

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/sqs"
)

func main() {
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		log.Fatal("config error:", err)
	}
	client := sqs.NewFromConfig(cfg)

	// Create a queue
	createResult, err := client.CreateQueue(context.TODO(), &sqs.CreateQueueInput{
		QueueName: aws.String("deploy-notifications"),
	})
	if err != nil {
		log.Fatal("failed to create queue:", err)
	}
	queueURL := *createResult.QueueUrl
	fmt.Println("queue created:", queueURL)

	// Send a message
	_, err = client.SendMessage(context.TODO(), &sqs.SendMessageInput{
		QueueUrl:    &queueURL,
		MessageBody: aws.String("auth-api deployed v1.4.2 successfully"),
	})
	if err != nil {
		log.Fatal("failed to send:", err)
	}
	fmt.Println("message sent!")
}

SQS uses queue URLs instead of queue names for all operations after creation. CreateQueue returns the URL, and you pass it to every subsequent call. If the queue already exists with the same settings, CreateQueue just returns the existing URL. It won’t error.

The message body is a plain string. In practice, you’d send JSON, but we’ll start simple.

Run it:

go run main.go

Expected output:

queue created: https://sqs.us-east-1.amazonaws.com/123456789/deploy-notifications
message sent!

The message is now sitting in the queue waiting to be picked up. Let’s go get it.

Step 2: Receive the Message (And Watch It Come Back)

What: Read the message from the queue.

Why: This is where SQS surprises everyone. When you receive a message, SQS doesn’t delete it. It just hides it for 30 seconds. If you don’t delete it in time, it comes right back. This is called the visibility timeout, and it’s the number one SQS gotcha.

Replace your entire main.go:

main.go

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/sqs"
)

func main() {
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		log.Fatal("config error:", err)
	}
	client := sqs.NewFromConfig(cfg)

	// Get the queue URL (queue already exists from Step 1)
	urlResult, err := client.GetQueueUrl(context.TODO(), &sqs.GetQueueUrlInput{
		QueueName: aws.String("deploy-notifications"),
	})
	if err != nil {
		log.Fatal("queue not found:", err)
	}
	queueURL := *urlResult.QueueUrl

	// Receive message — first time
	fmt.Println("=== First receive ===")
	result, err := client.ReceiveMessage(context.TODO(), &sqs.ReceiveMessageInput{
		QueueUrl:            &queueURL,
		MaxNumberOfMessages: 1,
	})
	if err != nil {
		log.Fatal("receive failed:", err)
	}

	if len(result.Messages) == 0 {
		fmt.Println("no messages (queue might be empty)")
		return
	}

	msg := result.Messages[0]
	fmt.Printf("got message: %s\n", *msg.Body)
	fmt.Println("NOT deleting it — let's see what happens...")

	// Wait for visibility timeout to expire (default is 30 seconds)
	fmt.Println("\nwaiting 35 seconds for visibility timeout...")
	time.Sleep(35 * time.Second)

	// Receive again — the SAME message comes back!
	fmt.Println("=== Second receive ===")
	result2, err := client.ReceiveMessage(context.TODO(), &sqs.ReceiveMessageInput{
		QueueUrl:            &queueURL,
		MaxNumberOfMessages: 1,
	})
	if err != nil {
		log.Fatal("receive failed:", err)
	}

	if len(result2.Messages) > 0 {
		fmt.Printf("got message AGAIN: %s\n", *result2.Messages[0].Body)
		fmt.Println("same message came back because we didn't delete it!")
	}
}

GetQueueUrl looks up a queue by name and returns its URL. We use this instead of hardcoding the URL from Step 1.

ReceiveMessage with MaxNumberOfMessages: 1 asks for one message at a time. The message becomes invisible to other consumers for 30 seconds (the default visibility timeout). After that, SQS assumes your consumer crashed and makes the message available again.

Run it:

go run main.go

Expected output:

=== First receive ===
got message: auth-api deployed v1.4.2 successfully
NOT deleting it — let's see what happens...

waiting 35 seconds for visibility timeout...
=== Second receive ===
got message AGAIN: auth-api deployed v1.4.2 successfully
same message came back because we didn't delete it!

This is by design. SQS guarantees at-least-once delivery. If your consumer crashes before finishing, the message gets reprocessed. But you have to delete the message yourself after you’re done with it. Let’s fix that.

Step 3: Receive and Delete (The Right Way)

What: Receive a message, process it, then delete it so it doesn’t come back.

Why: This is the correct SQS pattern: receive → process → delete. Skip the delete and your messages pile up and get reprocessed forever.

Replace your entire main.go:

main.go

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/sqs"
)

func main() {
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		log.Fatal("config error:", err)
	}
	client := sqs.NewFromConfig(cfg)

	urlResult, err := client.GetQueueUrl(context.TODO(), &sqs.GetQueueUrlInput{
		QueueName: aws.String("deploy-notifications"),
	})
	if err != nil {
		log.Fatal("queue not found:", err)
	}
	queueURL := *urlResult.QueueUrl

	// Receive
	result, err := client.ReceiveMessage(context.TODO(), &sqs.ReceiveMessageInput{
		QueueUrl:            &queueURL,
		MaxNumberOfMessages: 1,
	})
	if err != nil {
		log.Fatal("receive failed:", err)
	}

	if len(result.Messages) == 0 {
		fmt.Println("no messages in queue")
		return
	}

	msg := result.Messages[0]
	fmt.Printf("processing: %s\n", *msg.Body)

	// Process the message (your business logic goes here)
	fmt.Println("processing complete")

	// Delete it so it doesn't come back
	_, err = client.DeleteMessage(context.TODO(), &sqs.DeleteMessageInput{
		QueueUrl:      &queueURL,
		ReceiptHandle: msg.ReceiptHandle,
	})
	if err != nil {
		log.Fatal("delete failed:", err)
	}
	fmt.Println("message deleted from queue")

	// Try to receive again — should be empty
	result2, _ := client.ReceiveMessage(context.TODO(), &sqs.ReceiveMessageInput{
		QueueUrl:            &queueURL,
		MaxNumberOfMessages: 1,
	})
	if len(result2.Messages) == 0 {
		fmt.Println("queue is now empty — message won't come back")
	}
}

The key is ReceiptHandle. Every time you receive a message, SQS gives you a unique receipt handle for that specific receive. You pass this handle to DeleteMessage to tell SQS “I’m done with this one, remove it permanently.” The receipt handle changes every time the message is received, so you can’t reuse an old one.

Run it:

go run main.go

Expected output:

processing: auth-api deployed v1.4.2 successfully
processing complete
message deleted from queue
queue is now empty — message won't come back

That’s the complete receive-process-delete cycle. But sending and receiving one message at a time is slow. Let’s speed it up.

Step 4: Send Messages in Batches

What: Send up to 10 messages in a single API call instead of one at a time.

Why: Sending messages one by one means one API call per message. If you need to send 100 deploy notifications, that’s 100 HTTP requests. SendMessageBatch lets you send 10 per call, so that’s 10 calls instead of 100. Same result, 10x fewer API calls.

Replace your entire main.go:

main.go

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/sqs"
	"github.com/aws/aws-sdk-go-v2/service/sqs/types"
)

func main() {
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		log.Fatal("config error:", err)
	}
	client := sqs.NewFromConfig(cfg)

	urlResult, err := client.GetQueueUrl(context.TODO(), &sqs.GetQueueUrlInput{
		QueueName: aws.String("deploy-notifications"),
	})
	if err != nil {
		log.Fatal("queue not found:", err)
	}
	queueURL := *urlResult.QueueUrl

	// Build a batch of deploy notifications
	messages := []types.SendMessageBatchRequestEntry{
		{Id: aws.String("1"), MessageBody: aws.String("auth-api deployed v1.4.2 — success")},
		{Id: aws.String("2"), MessageBody: aws.String("auth-api deployed v1.4.3 — failed")},
		{Id: aws.String("3"), MessageBody: aws.String("payment-svc deployed v2.1.0 — success")},
		{Id: aws.String("4"), MessageBody: aws.String("payment-svc deployed v2.1.1 — success")},
		{Id: aws.String("5"), MessageBody: aws.String("user-svc deployed v3.0.0 — success")},
	}

	// Send all 5 in one API call
	result, err := client.SendMessageBatch(context.TODO(), &sqs.SendMessageBatchInput{
		QueueUrl: &queueURL,
		Entries:  messages,
	})
	if err != nil {
		log.Fatal("batch send failed:", err)
	}

	fmt.Printf("sent %d messages successfully\n", len(result.Successful))

	// Check for any failures
	if len(result.Failed) > 0 {
		for _, fail := range result.Failed {
			fmt.Printf("FAILED to send message %s: %s\n", *fail.Id, *fail.Message)
		}
	}
}

Each entry in the batch needs a unique Id. This is just for tracking which messages succeeded or failed in the response. It’s not the SQS message ID (SQS assigns that).

The batch limit is 10 messages or 256KB total, whichever comes first. If you have more than 10 messages, chunk them into batches of 10 and send each batch separately.

Run it:

go run main.go

Expected output:

sent 5 messages successfully

Five messages sent in one API call. Now let’s build something that reads them continuously.

Step 5: Build a Polling Loop

What: A program that runs continuously, picks up messages as they arrive, processes them, and deletes them.

Why: The previous steps were one-shot: run once, exit. A real message processor runs forever, waiting for new messages. This is the pattern every SQS consumer in production uses.

Replace your entire main.go:

main.go

package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"os/signal"
	"strings"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/sqs"
)

func main() {
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		log.Fatal("config error:", err)
	}
	client := sqs.NewFromConfig(cfg)

	urlResult, err := client.GetQueueUrl(context.TODO(), &sqs.GetQueueUrlInput{
		QueueName: aws.String("deploy-notifications"),
	})
	if err != nil {
		log.Fatal("queue not found:", err)
	}
	queueURL := *urlResult.QueueUrl

	// Handle Ctrl+C gracefully
	ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt)
	defer stop()

	fmt.Println("listening for messages... (Ctrl+C to stop)")
	processed := 0

	for {
		// Check if we got Ctrl+C
		select {
		case <-ctx.Done():
			fmt.Printf("\nstopping — processed %d messages total\n", processed)
			return
		default:
		}

		// Long polling — wait up to 20 seconds for messages
		result, err := client.ReceiveMessage(ctx, &sqs.ReceiveMessageInput{
			QueueUrl:            &queueURL,
			MaxNumberOfMessages: 10,
			WaitTimeSeconds:     20,
		})
		if err != nil {
			if ctx.Err() != nil {
				fmt.Printf("\nstopping — processed %d messages total\n", processed)
				return
			}
			log.Fatal("receive error:", err)
		}

		if len(result.Messages) == 0 {
			continue
		}

		// Process each message
		for _, msg := range result.Messages {
			body := *msg.Body
			fmt.Printf("processing: %s\n", body)

			// Your business logic here
			if strings.Contains(body, "failed") {
				fmt.Println("  >> ALERT: deployment failure detected!")
			} else {
				fmt.Println("  >> OK")
			}

			// Delete after successful processing
			_, err := client.DeleteMessage(ctx, &sqs.DeleteMessageInput{
				QueueUrl:      &queueURL,
				ReceiptHandle: msg.ReceiptHandle,
			})
			if err != nil {
				log.Printf("failed to delete message: %v", err)
				continue
			}
			processed++
		}
	}
}

Two important settings here.

WaitTimeSeconds: 20 enables long polling. Without it, ReceiveMessage returns immediately even if the queue is empty. That’s short polling, and it wastes API calls. With long polling, SQS waits up to 20 seconds for messages to arrive before returning an empty response. This saves you money and reduces latency.

MaxNumberOfMessages: 10 asks for up to 10 messages at once. SQS might return fewer depending on what’s available, but it won’t return more than 10.

The signal.NotifyContext pattern gives us a clean shutdown. When you press Ctrl+C, the context gets cancelled, and the loop exits gracefully instead of crashing.

Run it:

go run main.go

Expected output:

listening for messages... (Ctrl+C to stop)
processing: auth-api deployed v1.4.2 — success
  >> OK
processing: auth-api deployed v1.4.3 — failed
  >> ALERT: deployment failure detected!
processing: payment-svc deployed v2.1.0 — success
  >> OK
processing: payment-svc deployed v2.1.1 — success
  >> OK
processing: user-svc deployed v3.0.0 — success
  >> OK

It processes all 5 messages from Step 4, then waits for more. Press Ctrl+C to stop:

^C
stopping — processed 5 messages total

Step 6: Dead Letter Queue

What: Set up a second queue that catches messages that keep failing.

Why: Sometimes a message can’t be processed: bad JSON, missing data, a bug in your code. Without a dead letter queue (DLQ), that message gets retried forever, blocking your consumer. A DLQ captures these “poison pill” messages after a set number of failed attempts so your main queue keeps flowing.

Replace your entire main.go:

main.go

package main

import (
	"context"
	"encoding/json"
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/sqs"
	"github.com/aws/aws-sdk-go-v2/service/sqs/types"
)

func main() {
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		log.Fatal("config error:", err)
	}
	client := sqs.NewFromConfig(cfg)

	// Step A: Create the dead letter queue first
	dlqResult, err := client.CreateQueue(context.TODO(), &sqs.CreateQueueInput{
		QueueName: aws.String("deploy-notifications-dlq"),
	})
	if err != nil {
		log.Fatal("failed to create DLQ:", err)
	}
	dlqURL := *dlqResult.QueueUrl
	fmt.Println("DLQ created:", dlqURL)

	// Get the DLQ's ARN (we need it to link the queues).
	// AttributeNames takes the typed QueueAttributeName constants, not raw strings.
	dlqAttrs, err := client.GetQueueAttributes(context.TODO(), &sqs.GetQueueAttributesInput{
		QueueUrl:       &dlqURL,
		AttributeNames: []types.QueueAttributeName{types.QueueAttributeNameQueueArn},
	})
	if err != nil {
		log.Fatal("failed to get DLQ ARN:", err)
	}
	dlqARN := dlqAttrs.Attributes["QueueArn"]
	fmt.Println("DLQ ARN:", dlqARN)

	// Step B: Create the main queue with a redrive policy pointing to the DLQ
	redrivePolicy, _ := json.Marshal(map[string]string{
		"deadLetterTargetArn": dlqARN,
		"maxReceiveCount":     "3",
	})

	mainResult, err := client.CreateQueue(context.TODO(), &sqs.CreateQueueInput{
		QueueName: aws.String("deploy-notifications-v2"),
		Attributes: map[string]string{
			"RedrivePolicy": string(redrivePolicy),
		},
	})
	if err != nil {
		log.Fatal("failed to create main queue:", err)
	}
	mainURL := *mainResult.QueueUrl
	fmt.Println("main queue created:", mainURL)

	// Step C: Send a test message
	_, err = client.SendMessage(context.TODO(), &sqs.SendMessageInput{
		QueueUrl:    &mainURL,
		MessageBody: aws.String("this message will fail processing 3 times"),
	})
	if err != nil {
		log.Fatal("send failed:", err)
	}
	fmt.Println("\nsent a test message")

	// Step D: Receive it 3 times WITHOUT deleting (simulating failures)
	for i := 1; i <= 3; i++ {
		result, err := client.ReceiveMessage(context.TODO(), &sqs.ReceiveMessageInput{
			QueueUrl:            &mainURL,
			MaxNumberOfMessages: 1,
			WaitTimeSeconds:     5,
			VisibilityTimeout:   2, // short timeout so it comes back fast
		})
		if err != nil {
			log.Fatal("receive error:", err)
		}
		if len(result.Messages) > 0 {
			fmt.Printf("attempt %d: received '%s' — NOT deleting (simulating failure)\n", i, *result.Messages[0].Body)
		} else {
			fmt.Printf("attempt %d: no message available yet\n", i)
			i-- // retry this attempt
		}

		// Wait for visibility timeout to expire before next attempt
		if i < 3 {
			fmt.Println("  waiting 3 seconds for message to become visible again...")
			time.Sleep(3 * time.Second)
		}
	}

	// Step E: Check the DLQ
	fmt.Println("\nchecking dead letter queue...")
	dlqResult2, err := client.ReceiveMessage(context.TODO(), &sqs.ReceiveMessageInput{
		QueueUrl:            &dlqURL,
		MaxNumberOfMessages: 1,
		WaitTimeSeconds:     5,
	})
	if err != nil {
		log.Fatal("DLQ receive error:", err)
	}
	if len(dlqResult2.Messages) > 0 {
		fmt.Printf("DLQ has message: %s\n", *dlqResult2.Messages[0].Body)
		fmt.Println("message moved to DLQ after 3 failed attempts!")
	} else {
		fmt.Println("DLQ is empty (message might still be in transit — try again in a few seconds)")
	}
}

Here’s what’s happening. The redrive policy tells SQS: “If a message is received 3 times without being deleted, move it to the dead letter queue.” The maxReceiveCount of 3 means the message gets 3 chances before being sent to the DLQ.

We set VisibilityTimeout: 2 on the receive calls so the message comes back after just 2 seconds instead of the default 30. This is only for testing. In production, you’d use a longer timeout that matches how long your processing takes.

Note: In practice, the message might not appear in the DLQ immediately. SQS processes redrive policies asynchronously. If the DLQ check comes back empty, wait a few seconds and check again with: aws sqs receive-message --queue-url YOUR_DLQ_URL

Run it:

go run main.go

Expected output:

DLQ created: https://sqs.us-east-1.amazonaws.com/123456789/deploy-notifications-dlq
DLQ ARN: arn:aws:sqs:us-east-1:123456789:deploy-notifications-dlq
main queue created: https://sqs.us-east-1.amazonaws.com/123456789/deploy-notifications-v2

sent a test message
attempt 1: received 'this message will fail processing 3 times' — NOT deleting (simulating failure)
attempt 2: received 'this message will fail processing 3 times' — NOT deleting (simulating failure)
attempt 3: received 'this message will fail processing 3 times' — NOT deleting (simulating failure)

checking dead letter queue...
DLQ has message: this message will fail processing 3 times
message moved to DLQ after 3 failed attempts!

In production, you’d set up a CloudWatch alarm on the DLQ message count so you get notified when messages are failing.
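A sketch of that alarm with the AWS CLI — the SNS topic ARN at the end is a placeholder you’d replace with your own alerting topic:

```shell
# Alarm when the DLQ has any visible messages during an evaluation period.
# AWS/SQS publishes ApproximateNumberOfMessagesVisible per QueueName.
aws cloudwatch put-metric-alarm \
  --alarm-name deploy-notifications-dlq-not-empty \
  --namespace AWS/SQS \
  --metric-name ApproximateNumberOfMessagesVisible \
  --dimensions Name=QueueName,Value=deploy-notifications-dlq \
  --statistic Maximum \
  --period 300 \
  --evaluation-periods 1 \
  --threshold 0 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789:ops-alerts
```

Any message landing in the DLQ then pages you instead of sitting there unnoticed.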

Cleanup

aws sqs delete-queue --queue-url YOUR_MAIN_QUEUE_URL
aws sqs delete-queue --queue-url YOUR_DLQ_URL
aws sqs delete-queue --queue-url YOUR_V1_QUEUE_URL

Or delete by name using the AWS CLI:

# Get URLs and delete
for q in deploy-notifications deploy-notifications-v2 deploy-notifications-dlq; do
  url=$(aws sqs get-queue-url --queue-name $q --query QueueUrl --output text 2>/dev/null)
  if [ -n "$url" ]; then
    aws sqs delete-queue --queue-url "$url"
    echo "deleted $q"
  fi
done

What We Built

Starting from a single send/receive, we incrementally built:

  1. Queue creation and basic send: the SQS “hello world”
  2. The visibility timeout trap: messages come back if you don’t delete them
  3. Receive-process-delete: the correct SQS consumer pattern
  4. Batch sending: 10x fewer API calls for bulk messages
  5. Long polling loop: a production-ready consumer with graceful shutdown
  6. Dead letter queue: automatic handling of messages that keep failing

Every SQS consumer in production uses the pattern from Step 5: long poll, process in a batch, delete after success, and let the DLQ catch anything that keeps failing.

Next Steps

This covers the fundamentals. In a real project, you’d add:

  • FIFO queues for strict ordering (append .fifo to the queue name)
  • Message attributes for metadata (like message type or priority)
  • Lambda triggers so SQS invokes your function automatically, no polling loop needed
  • Batch delete with DeleteMessageBatch to delete up to 10 messages in one call

Check out Building a Go Lambda Function to see how to trigger Lambda from SQS, Go + DynamoDB CRUD to store processed messages in a database, or Terraform From Scratch to provision SQS queues as code.


Cheat Sheet

Copy-paste reference for Go + SQS.

Setup:

cfg, _ := config.LoadDefaultConfig(context.TODO())
client := sqs.NewFromConfig(cfg)

Create a queue:

result, _ := client.CreateQueue(ctx, &sqs.CreateQueueInput{QueueName: aws.String("my-queue")})
queueURL := *result.QueueUrl

Get queue URL by name:

result, _ := client.GetQueueUrl(ctx, &sqs.GetQueueUrlInput{QueueName: aws.String("my-queue")})
queueURL := *result.QueueUrl

Send one message:

client.SendMessage(ctx, &sqs.SendMessageInput{
    QueueUrl:    &queueURL,
    MessageBody: aws.String("your message here"),
})

Send a batch (up to 10):

client.SendMessageBatch(ctx, &sqs.SendMessageBatchInput{
    QueueUrl: &queueURL,
    Entries: []types.SendMessageBatchRequestEntry{
        {Id: aws.String("1"), MessageBody: aws.String("msg 1")},
        {Id: aws.String("2"), MessageBody: aws.String("msg 2")},
    },
})

Receive + delete (the correct pattern):

result, _ := client.ReceiveMessage(ctx, &sqs.ReceiveMessageInput{
    QueueUrl:            &queueURL,
    MaxNumberOfMessages: 10,
    WaitTimeSeconds:     20,  // long polling
})
for _, msg := range result.Messages {
    // process msg.Body
    client.DeleteMessage(ctx, &sqs.DeleteMessageInput{
        QueueUrl:      &queueURL,
        ReceiptHandle: msg.ReceiptHandle,
    })
}

DLQ redrive policy (JSON):

policy, _ := json.Marshal(map[string]string{
    "deadLetterTargetArn": "arn:aws:sqs:REGION:ACCOUNT:queue-dlq",
    "maxReceiveCount":     "3",
})

Key rules to remember:

  • SQS operations use queue URLs, not queue names. Get the URL first with CreateQueue or GetQueueUrl
  • Messages are not deleted when you receive them. You must call DeleteMessage with the ReceiptHandle
  • Default visibility timeout is 30 seconds. Set it longer than your processing time
  • WaitTimeSeconds: 20 enables long polling. Always use this in production to reduce costs
  • MaxNumberOfMessages max is 10. SQS won’t return more than 10 per receive call
  • Batch send limit is 10 messages or 256KB total, whichever comes first
  • ReceiptHandle changes every time a message is received. You can’t reuse old ones
  • Dead letter queues catch messages after maxReceiveCount failed receives. Always set one up
