The Go memory model specifies two main ways in which channels are used for synchronization:

  1. A send onto a channel happens-before the corresponding receive from that channel completes.
  2. The $k^{th}$ receive from a channel with capacity $C$ happens-before the $(k+C)^{th}$ send onto that channel completes.

Recall that happens-before is a mathematical relation, as discussed here.

Rule (1) above has been around for a while. The rule is very similar to what was originally proposed by Lamport in 1978. It establishes a happens-before relation between a sender and its corresponding receiver. Rule (2) is a bit more esoteric. It was not present in Lamport’s study of distributed systems. There is a good reason for that absence.

Go is a language in between concurrency and distribution.

Both concurrency and distribution speak of independent agents cooperating on a common task. For that to happen, agents need to coordinate, to synchronize. Although similar in many ways, concurrency and distribution are fundamentally different. Because of this difference, synchronization in the setting of distribution differs from synchronization for concurrency.

In a concurrent system, we assume that the agents operate within a single environment. In Go, for example, all agents (goroutines) are under a single umbrella, in this case the Go runtime. This overarching environment allows us to assume that no messages are lost during transmission.

In a distributed system, however, there is no such point of authority—at least not without making lots of extra assumptions about the system. For example, it may be impossible to tell whether a message was received. A network delay may be indistinguishable from a crashed/failed node. This impossibility exists even if we label some node as the “authoritative source of information about the state of the system.” After all, what if we are unable to reach this special node? In a distributed system, communication is no longer perfect, and we are forced to deal with this fact at some point.

Locks are often used to program concurrent systems, where the agents sit under a central resource manager. This manager can be the operating system, or a language runtime with the help of the OS. Unlike locks, channels are a step toward synchronization in the setting of distribution.

Go borrowed rule (1) from Lamport’s research on distribution. On the other hand, rule (2) comes from the realization that Go is not all the way there. Rule (2) allows for the use of channels as locks, with send acting as acquire and receive as release (see previous post for details):

  T0           T1
c <- 0    |  c <- 0
z := 42   |  z := 43
<-c       |  <-c

Rule (1) gives us an order, while rule (2) is related to mutual exclusion (an order exists, but we don’t know which). In a sense, rule (1) is constructive or intuitionistic, while rule (2) is classical. If you are interested, you can find more in Section 3.5 of our paper Ready, set, Go! Data-race detection and the Go language.


While channels are typically used to program distributed systems, Go has a slightly different angle on message passing. Go introduces rule (2), which takes into account the channels’ capacity:

  2. The $k^{th}$ receive from a channel with capacity $C$ happens-before the $(k+C)^{th}$ send onto that channel completes.

With rule (2), we can program channels as locks. This places the language on the spectrum between concurrency and distribution.