Go Channels: Buffered vs Unbuffered, Select, and Production Patterns
Channels are the backbone of Go's concurrency model. Goroutines are cheap to create — channels are how you make them talk to each other without shared memory and the race conditions that come with it.
But here's the thing most tutorials skip: the choice between buffered and unbuffered channels isn't just about performance. It's a semantic decision. An unbuffered channel is a synchronization point. A buffered channel is a queue. Getting that distinction wrong produces code that either deadlocks silently or leaks goroutines under load.
This guide covers both types from first principles, explains how select actually behaves (it's not what most people assume), and walks through six patterns that appear repeatedly in production Go services.
What a Channel Actually Is
A channel is a typed, goroutine-safe conduit for passing values. That's it. Under the hood it's a circular buffer, a mutex, and two wait queues (one for blocked senders, one for blocked receivers).
ch := make(chan int) // unbuffered
ch := make(chan int, 10) // buffered, capacity 10
Two rules that never change:
- Sending to a closed channel panics.
- Receiving from a closed channel never blocks: it yields any remaining buffered values, then the zero value (with false as the second return if you check it).
Keep those in your head — they explain most channel bugs.
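Both rules can be seen in a few lines. This sketch closes a buffered channel, drains it, then recovers from the send-after-close panic:

```go
package main

import "fmt"

func main() {
	ch := make(chan int, 2)
	ch <- 1
	ch <- 2
	close(ch)

	// Rule 2: receives on a closed channel never block.
	// Buffered values drain first, then the zero value with ok == false.
	v, ok := <-ch
	fmt.Println(v, ok) // 1 true
	v, ok = <-ch
	fmt.Println(v, ok) // 2 true
	v, ok = <-ch
	fmt.Println(v, ok) // 0 false

	// Rule 1: sending to a closed channel panics.
	defer func() {
		if r := recover(); r != nil {
			fmt.Println("recovered:", r)
		}
	}()
	ch <- 3 // panics: "send on closed channel"
}
```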
Unbuffered Channels
An unbuffered channel has zero capacity. Every send blocks until a receiver is ready. Every receive blocks until a sender is ready. This makes the channel a synchronization point: both goroutines meet at the channel for a handoff.
func main() {
	ch := make(chan string)
	go func() {
		fmt.Println("worker: doing work")
		ch <- "done" // blocks until main receives
	}()
	result := <-ch // blocks until worker sends
	fmt.Println("main received:", result)
}
The practical implication: if you send on an unbuffered channel and nobody is receiving, the sending goroutine parks. Forever, if no receiver ever arrives. That's a goroutine leak.
// Goroutine leak — don't do this
func leaky() {
	ch := make(chan int)
	go func() {
		ch <- 42 // parks here forever if nobody reads
	}()
	// function returns without reading from ch
}
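One repair, when it's acceptable for the result to go unread: give the channel capacity 1 so the send always completes. The function name notLeaky here is illustrative, not from the original:

```go
package main

import "fmt"

// notLeaky: the capacity-1 buffer lets the send complete even if the
// caller never reads, so the goroutine exits instead of parking forever.
func notLeaky() chan int {
	ch := make(chan int, 1)
	go func() {
		ch <- 42
	}()
	return ch
}

func main() {
	fmt.Println(<-notLeaky()) // 42
}
```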
When to use unbuffered channels:
- When you need to confirm the receiver got the value before moving on
- For signalling (done patterns, stop patterns)
- When you want the sender to wait until work is acknowledged
Buffered Channels
A buffered channel has capacity. Sends block only when the buffer is full. Receives block only when the buffer is empty. The sender and receiver can run at different speeds up to the buffer limit.
ch := make(chan int, 3) // capacity 3
ch <- 1 // doesn't block — buffer has room
ch <- 2 // doesn't block
ch <- 3 // doesn't block
ch <- 4 // blocks — buffer full, no receiver ready
A common use: absorbing bursts in a producer that generates work faster than consumers can process it.
func produce(ch chan<- Job, jobs []Job) {
	defer close(ch)
	for _, j := range jobs {
		ch <- j // only blocks when consumers fall too far behind
	}
}

func consume(ch <-chan Job) {
	for job := range ch {
		process(job)
	}
}

func main() {
	ch := make(chan Job, 50) // 50-item buffer smooths bursts
	go produce(ch, loadJobs())
	consume(ch)
}
When to use buffered channels:
- When producers and consumers run at different speeds
- To reduce goroutine blocking in high-throughput pipelines
- As a semaphore (a channel of fixed capacity used to limit concurrency — more on this below)
Buffered vs Unbuffered: The Decision Table
| | Unbuffered | Buffered |
|---|---|---|
| Capacity | 0 | N (you set it) |
| Send blocks when | No receiver ready | Buffer full |
| Receive blocks when | No sender ready | Buffer empty |
| Guarantees | Receiver acknowledged | Item delivered to queue |
| Mental model | Synchronization point | Queue / work list |
| Goroutine leak risk | Higher (sender parks easily) | Lower (sender parks only on full buffer) |
| Right for | Signalling, handoffs, done patterns | Pipelines, worker queues, burst absorption |
The sizing question for buffered channels: when in doubt, start with the number of items you expect in flight at peak. Benchmark before tuning. A buffer that's too small eliminates the benefit; a buffer that's too large masks backpressure problems.
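When benchmarking, the built-ins len and cap report a buffer's current occupancy and capacity, which is a quick way to see whether a buffer is sized sensibly. A minimal sketch:

```go
package main

import "fmt"

func main() {
	ch := make(chan int, 4)
	ch <- 1
	ch <- 2
	// len reports items currently queued; cap reports the buffer size.
	fmt.Println(len(ch), cap(ch)) // 2 4
	<-ch
	fmt.Println(len(ch), cap(ch)) // 1 4
}
```

A buffer that sits near cap under normal load is a sign the consumer is falling behind, not that the buffer needs to grow.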
Directional Channels
Go lets you constrain a channel to send-only or receive-only at the type level. This isn't just documentation — the compiler enforces it.
func generator(out chan<- int) { // send-only: can't receive from out
	defer close(out)
	for i := 0; i < 5; i++ {
		out <- i
	}
}

func printer(in <-chan int) { // receive-only: can't send to in
	for v := range in {
		fmt.Println(v)
	}
}

func main() {
	ch := make(chan int, 5) // bidirectional at creation
	go generator(ch)        // implicitly converts to chan<- int
	printer(ch)             // implicitly converts to <-chan int
}
Use directional types on every function signature that touches a channel. It prevents accidental closes from the wrong side and makes ownership explicit at a glance.
The select Statement
select lets a goroutine wait on multiple channel operations simultaneously. It picks whichever case is ready. If multiple cases are ready at the same time, Go picks one at random — not the first one listed.
select {
case v := <-ch1:
	fmt.Println("received from ch1:", v)
case v := <-ch2:
	fmt.Println("received from ch2:", v)
}
If no case is ready, select blocks. Add a default case to make it non-blocking:
select {
case v := <-ch:
	fmt.Println("got:", v)
default:
	fmt.Println("nothing ready, moving on")
}
The randomness matters. Don't write code that assumes channel cases in select execute in order — they don't.
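You can observe the pseudo-random choice directly. In this sketch both cases are ready on every iteration, and over many trials each one wins some of the time:

```go
package main

import "fmt"

func main() {
	ch1 := make(chan string, 1)
	ch2 := make(chan string, 1)
	counts := map[string]int{}

	for i := 0; i < 1000; i++ {
		ch1 <- "ch1" // both cases ready on every iteration
		ch2 <- "ch2"
		select {
		case v := <-ch1:
			counts[v]++
			<-ch2 // drain the other so the next iteration starts clean
		case v := <-ch2:
			counts[v]++
			<-ch1
		}
	}
	// Neither case has priority: both win roughly half the time.
	fmt.Println(counts["ch1"] > 0 && counts["ch2"] > 0) // true
}
```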
Production Patterns
1. Timeout with select
The most common select pattern in production code: give an operation a deadline without reaching for context when the scope is local.
func fetchWithTimeout(url string) (string, error) {
	resultCh := make(chan string, 1)
	go func() {
		// simulate HTTP fetch
		time.Sleep(200 * time.Millisecond)
		resultCh <- "response body"
	}()
	select {
	case result := <-resultCh:
		return result, nil
	case <-time.After(500 * time.Millisecond):
		return "", fmt.Errorf("fetch timed out")
	}
}
Note the buffered resultCh with capacity 1. If the timeout fires first, the goroutine still finishes and sends its result — but since the channel has a slot, it doesn't leak. Without the buffer, the goroutine would park on resultCh <- forever after the caller returned.
For anything that crosses function or package boundaries, use context.WithTimeout instead. This local pattern is for contained operations where you own both sides.
2. Done Channel (Cancellation Signal)
Before context existed, done channels were how Go codebases signalled shutdown. You'll still see this pattern in low-level code and in context's own implementation.
func worker(jobs <-chan Job, done <-chan struct{}) {
	for {
		select {
		case <-done:
			fmt.Println("worker shutting down")
			return
		case job, ok := <-jobs:
			if !ok {
				return // jobs channel closed
			}
			process(job)
		}
	}
}

func main() {
	jobs := make(chan Job, 10)
	done := make(chan struct{})
	go worker(jobs, done)
	// ... send jobs ...
	close(done) // broadcast shutdown to all goroutines listening on done
}
close(done) broadcasts to every goroutine waiting on <-done simultaneously. Sending a value only wakes one receiver; closing wakes all of them. That's why done channels are always chan struct{} (zero-size, just a signal) and always closed rather than sent on.
For new code, prefer context.Context — it composes better. Use the done channel pattern when you need a lightweight broadcast with no external dependencies.
3. nil Channel to Disable a select Case
A nil channel blocks forever. Inside select, a nil channel case is never selected. This lets you disable cases dynamically without restructuring your select.
func merge(ch1, ch2 <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for ch1 != nil || ch2 != nil {
			select {
			case v, ok := <-ch1:
				if !ok {
					ch1 = nil // disable this case; ch2 still active
					continue
				}
				out <- v
			case v, ok := <-ch2:
				if !ok {
					ch2 = nil // disable this case
					continue
				}
				out <- v
			}
		}
	}()
	return out
}
Without this trick, a closed channel would be selected on every iteration (it always returns the zero value immediately), flooding your select loop. Setting it to nil cleanly removes it from consideration.
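A small driver makes the behavior concrete. merge is repeated here from above so the sketch compiles on its own; interleaving order is nondeterministic, so the driver checks the sum rather than the sequence:

```go
package main

import "fmt"

// merge repeated from above so this sketch is self-contained.
func merge(ch1, ch2 <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for ch1 != nil || ch2 != nil {
			select {
			case v, ok := <-ch1:
				if !ok {
					ch1 = nil // disable this case; ch2 still active
					continue
				}
				out <- v
			case v, ok := <-ch2:
				if !ok {
					ch2 = nil // disable this case
					continue
				}
				out <- v
			}
		}
	}()
	return out
}

func main() {
	ch1 := make(chan int, 3)
	ch2 := make(chan int, 3)
	for _, v := range []int{1, 2, 3} {
		ch1 <- v
	}
	for _, v := range []int{10, 20, 30} {
		ch2 <- v
	}
	close(ch1)
	close(ch2)

	sum := 0
	for v := range merge(ch1, ch2) { // exits once both inputs are drained
		sum += v
	}
	fmt.Println(sum) // 66
}
```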
4. Semaphore via Buffered Channel
A buffered channel of fixed capacity acts as a counting semaphore — it limits how many goroutines run a section of code concurrently without importing sync or golang.org/x/sync/semaphore.
var sem = make(chan struct{}, 10) // max 10 concurrent operations

func limitedFetch(url string) {
	sem <- struct{}{}        // acquire: blocks if 10 are already running
	defer func() { <-sem }() // release
	// only 10 of these run at a time
	resp, err := http.Get(url)
	_ = resp
	_ = err
}
This is lightweight and idiomatic for simple cases. For more complex semaphore behavior (weighted, prioritized), use golang.org/x/sync/semaphore.
5. or-done: Safe Draining Across Pipelines
In a pipeline, you often need to drain a channel while also respecting a ctx.Done() signal. Wrapping that pattern in an orDone helper keeps pipeline stage code clean.
func orDone(ctx context.Context, ch <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for {
			select {
			case <-ctx.Done():
				return
			case v, ok := <-ch:
				if !ok {
					return
				}
				select {
				case out <- v:
				case <-ctx.Done():
					return
				}
			}
		}
	}()
	return out
}
Usage in a pipeline stage becomes a clean range loop:
func process(ctx context.Context, in <-chan int) <-chan string {
	out := make(chan string)
	go func() {
		defer close(out)
		for v := range orDone(ctx, in) {
			out <- fmt.Sprintf("processed: %d", v)
		}
	}()
	return out
}
6. Channel Ownership — The Rule That Prevents Most Bugs
This isn't a named pattern, but it's the rule that underlies all the patterns above:
Only the goroutine that creates a channel should close it.
Receivers must never close a channel they didn't create. Closing from the receiver side causes a panic when the sender tries to send after the close. Closing from multiple senders causes a panic when the second one closes an already-closed channel.
If multiple goroutines write to a channel and you need to close it when all of them finish, use a sync.WaitGroup:
func fanIn(sources ...<-chan int) <-chan int {
	out := make(chan int)
	var wg sync.WaitGroup
	for _, src := range sources {
		wg.Add(1)
		go func(s <-chan int) {
			defer wg.Done()
			for v := range s {
				out <- v
			}
		}(src)
	}
	// A single goroutine owns the close — nobody else touches it
	go func() {
		wg.Wait()
		close(out)
	}()
	return out
}
Single creator, single closer. Everything else is a receiver.
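A quick driver shows the payoff: because exactly one goroutine closes out, ranging over it exits cleanly once every source is drained. fanIn is repeated here so the sketch runs standalone:

```go
package main

import (
	"fmt"
	"sync"
)

// fanIn repeated from above so this sketch compiles on its own.
func fanIn(sources ...<-chan int) <-chan int {
	out := make(chan int)
	var wg sync.WaitGroup
	for _, src := range sources {
		wg.Add(1)
		go func(s <-chan int) {
			defer wg.Done()
			for v := range s {
				out <- v
			}
		}(src)
	}
	go func() {
		wg.Wait()
		close(out) // single closer: range loops over out exit cleanly
	}()
	return out
}

func main() {
	a := make(chan int, 2)
	b := make(chan int, 2)
	a <- 1
	a <- 2
	close(a)
	b <- 3
	b <- 4
	close(b)

	sum := 0
	for v := range fanIn(a, b) { // exits when all sources are drained
		sum += v
	}
	fmt.Println(sum) // 10
}
```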
Common Mistakes
Ranging over a channel nobody closes. for v := range ch blocks forever when the channel has no more data but hasn't been closed. Always close the channel from the sender side when all values have been sent.
Sending on a closed channel. This panics. If multiple goroutines write to a shared channel, coordinate the close with a WaitGroup (see pattern 6 above) rather than trying to close from individual senders.
Unbuffered channel with no goroutine. This deadlocks immediately:
ch := make(chan int)
ch <- 42 // deadlock: nobody will ever receive
Assuming select priority. If two cases in a select are ready simultaneously, Go picks one at random. Never write logic that depends on a particular case firing first.
Zero-value channel (nil) used accidentally. A channel declared but never initialized is nil. Operations on a nil channel block forever. Always make your channels before use.
Closing Thoughts
The unbuffered/buffered distinction is simpler than most tutorials make it: unbuffered channels synchronize; buffered channels decouple. select makes a goroutine wait on multiple operations with one clean statement. And channel ownership — one creator, one closer — prevents the category of panics that trip up almost everyone writing Go concurrency for the first time.
The six patterns in this guide — timeout with select, done channels, nil disabling, semaphore, or-done, and ownership discipline — cover the majority of channel usage in production Go services. Once they become muscle memory, channels stop being a source of bugs and start being a genuine design tool.
FAQ
Can you receive from an unbuffered channel in the same goroutine that sends?
No. The send blocks waiting for a receiver, so the goroutine never reaches the receive, and the program deadlocks. If you must send and then receive in the same goroutine, the channel needs a buffer with spare capacity so the send can complete first.
What happens when you range over a closed channel?
for v := range ch exits cleanly when the channel is closed and drained. After close, the channel returns all remaining buffered values first, then returns the zero value with ok = false, and range exits. This is the idiomatic way to drain a channel completely.
When should I use context instead of a done channel?
Use context.Context for anything that crosses package or API boundaries — HTTP handlers, database calls, gRPC methods. Use a done channel when you own both sides of the communication and want a lightweight, dependency-free cancel signal. In practice, most new code should use context.
Is it safe to close a channel from multiple goroutines?
No. Closing an already-closed channel panics. Only one goroutine should close a given channel. Coordinate multiple writers with a sync.WaitGroup and have a single goroutine perform the close after all writers finish.
How do I choose buffer size?
Start with the number of items you expect in flight at peak load, benchmark under realistic conditions, and adjust. A buffer of 1 is often enough to prevent a sender from blocking while the receiver processes the previous value. Avoid very large buffers that hide backpressure — if your buffer is growing unboundedly, the real fix is a faster consumer or more consumers, not a bigger buffer.
Does select block if all channels are nil?
Yes. A select with only nil channels (or no cases at all) blocks forever. If you have a default case, it fires instead. This is occasionally useful for deliberate "park this goroutine indefinitely" behavior, but in most cases it indicates a bug where a channel was never initialized.