Go Toolchain Primer

A toolchain is a package composed of the compiler and ancillary tools, libraries and runtime for a language, which together allow you to build and run code written in that language. The GNU toolchain is the most commonly used toolchain on Linux and allows building programs written in C, C++, Fortran and a host of other languages too.

gc

The first Go toolchain to be made available, and the one most people are referring to when they talk about Go, is gc. gc (not to be confused with GC, which usually refers to the garbage collector) is the compiler and toolchain that evolved from the Plan 9 toolchain; it includes its own compiler, assembler, linker and tools, as well as the Go runtime and standard library. With Go 1.5 the parts of the toolchain that were written in C have been rewritten in Go, so the Plan 9 legacy is gone in terms of code but remains in spirit. The toolchain supports i386, x86_64, arm, arm64 and powerpc64, and the code is BSD licensed.

gccgo

gccgo extends the gcc project to support Go. gcc is a widely used compiler that, together with GNU binutils for the linker and assembler, supports a large number of processor architectures. gccgo currently only supports Go 1.4, but Go 1.5 support is in the works. The advantage of using the gcc compiler infrastructure is that, as well as supporting more processor architectures than gc, gccgo can take advantage of the more advanced middle-end and back-end optimizations that gcc has developed over the years, which could lead to faster generated code. The GNU toolchain and gccgo are GPLv3 licensed, which some people may find problematic, but it is what many Linux distributions use to support Go on architectures not supported by gc, like SPARC, S/390 or MIPS.

llgo

LLVM is a compiler infrastructure project similar to gcc, with a couple of key differences. Firstly, it was developed from the ground up in C++ at a much later date than gcc, so the code is generally more modern in style and has a clearer structure. Secondly, it is BSD licensed, which has attracted a number of large companies such as Apple and Google to become heavily involved (in fact Apple employs the project’s founder). llgo is a Go compiler built on top of the LLVM compiler infrastructure. It is BSD licensed and supports nearly as many architectures as gccgo, but it feels like a less mature project and fewer people seem to be using it, at least publicly.

Why so many toolchains?

One thing that may appear odd is that all three toolchains are predominantly developed by Google engineers. All the toolchains contain some of the same components – the runtime and libraries are largely shared between the projects, gccgo and llgo share a compiler frontend (language parser), and all three compilers compile ahead of time and generate native code. Perhaps Google feels that diversity of implementations is good for the language – I would be inclined to agree – and it looks like Google is spending its effort relatively sparingly on gccgo and llgo, so the development cost of that strategy may not be that high.

I would suggest most people should just stick with gc for their Go development needs, but it will be interesting to see in which directions the various toolchains develop. In later posts I will go into a bit more depth about the relative pros and cons of each.

Channel buffering in Go

Go provides channels as a core concurrency primitive. Channels can be used to pass objects between goroutines safely and to construct quite complex concurrent data structures. However, they are still a fairly low-level construct, and it is not always clear how best to use them.

I was reminded of this when I saw Evan Huus’s excellent talk on Complex Concurrency Patterns in Go at the Golang UK conference. One of the points he made was that on his team infinite buffering is considered a “code smell”. I find this interesting, particularly as someone who has written a bit of Erlang code. In the Erlang concurrency model there are no channels; instead every process (processes in Erlang are lightweight, like goroutines) has an input message queue that is effectively unbounded in size. In Go, channels are first-class objects and must be given a finite buffer size on construction, the default being zero.
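
To make that concrete, here is how the two kinds of channel are created (the element type and capacity are arbitrary choices for illustration):

unbuffered := make(chan int)  // capacity zero: every send blocks until a receiver is ready
buffered := make(chan int, 8) // capacity eight: sends only block once eight values are pending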

On the face of it the Go approach is appealing. Nobody wants runaway resource allocation in their program, and Go provides a useful way of, for example, throttling excessively fast producers in a producer-consumer system to prevent them from filling up memory with unprocessed data. But how large should you make channel buffers?

Unbuffered channels are the default in Go but can have some undesirable side effects. For example, a producer and consumer connected by an unbuffered channel can suffer reduced parallelism: the producer blocks waiting for the consumer to finish working, and the consumer blocks waiting for the producer to produce output. In the following code, increasing the channel buffer from zero to one causes a speedup, at least on my machine:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// producer sends 100,000 values, sleeping for a random period of up to
// 10 microseconds before each send to simulate work.
func producer(ch chan bool) {
	for i := 0; i < 100000; i++ {
		time.Sleep(time.Duration(10 * rand.Float64() * float64(time.Microsecond)))
		ch <- true
	}
	close(ch)
}

// consumer drains the channel, simulating a similar amount of work per value.
func consumer(ch chan bool) {
	for range ch {
		time.Sleep(time.Duration(10 * rand.Float64() * float64(time.Microsecond)))
	}
}

func unbuffered() {
	ch := make(chan bool, 0) // capacity zero: producer and consumer run in lockstep
	go producer(ch)
	consumer(ch)
}

func buffered() {
	ch := make(chan bool, 1) // a single-slot buffer lets their sleeps overlap
	go producer(ch)
	consumer(ch)
}

func main() {
	startTime := time.Now()
	unbuffered()
	fmt.Printf("unbuffered: %vn", time.Since(startTime))
	startTime = time.Now()
	buffered()
	fmt.Printf("buffered: %vn", time.Since(startTime))
}

Unbuffered channels are also more prone to deadlock. Whether this is a good thing or a bad thing can be argued either way: deadlocks that would be masked by buffered channels become visible with unbuffered channels, which allows them to be fixed, and the behaviour of an unbuffered system is generally simpler and easier to reason about.
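
As a minimal illustration (a contrived example of my own, not from Evan’s talk), the following program deadlocks because the unbuffered send has no receiver ready; change the capacity to one and it runs to completion, the blocking merely deferred until the buffer is full:

package main

import "fmt"

func main() {
	ch := make(chan int) // unbuffered
	ch <- 1              // blocks forever: no other goroutine is receiving, so the
	                     // runtime aborts with "all goroutines are asleep - deadlock!"
	fmt.Println(<-ch)
}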

So if we decide we need to create a buffered channel, how large should that buffer be? That question turns out to be pretty hard to answer. Channel buffers are allocated with malloc when the channel is created, so the buffer size is not an upper bound on the size of the buffer but the actual allocated size: the larger the buffer size, the more system memory it will consume and the worse cache locality it will have. This means we can’t just use a very large number and treat the channel as if it were infinitely buffered.
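
A rough way to observe the cost (a sketch of my own using the runtime package; the exact numbers will vary and garbage collection adds noise):

package main

import (
	"fmt"
	"runtime"
)

func main() {
	var before, after runtime.MemStats
	runtime.ReadMemStats(&before)

	// A "very large" buffer of a million int64 slots is paid for up
	// front, whether or not any values are ever sent on the channel.
	ch := make(chan int64, 1<<20)

	runtime.ReadMemStats(&after)
	fmt.Printf("buffer cost: ~%d KB\n", (after.HeapAlloc-before.HeapAlloc)/1024)
	_ = ch
}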

In the example below, which is inspired by an open source project, we have a logging API with an internal channel that passes log messages from the API entry point to the log backend, which writes them to a file or the network depending on the configuration:

type Logger struct {
	logChan chan string
}

func NewLogger() *Logger {
	// Note the hardcoded buffer size of 100.
	return &Logger{logChan: make(chan string, 100)}
}

func (l *Logger) WriteString(str string) {
	l.logChan <- str
}

In this case the channel is providing two things. Firstly, it provides synchronization, so multiple goroutines can call into the API safely at the same time. This is a fairly uncontentious use of a channel. Secondly, it provides a buffer to prevent callers of the API from blocking unduly when many logging calls are made around the same time; it won’t increase throughput, but it will help if logging is bursty. But is this a good use of a channel? And is 100 the right size to use?

The two questions are somewhat intertwined in my opinion. Channels make good buffers at small buffer sizes. As mentioned above, the storage for a channel is allocated up front at its full size, so a large buffer can be implemented more space-efficiently in other ways. The right size for a buffer depends on many things: in this example, the number of goroutines logging, how bursty the logging is, the tolerance for blocking, the amount of memory in the machine, and probably more besides. For this reason the API should provide a way of setting the size of the channel buffer, so users can adapt it to their needs, rather than hardcoding it.
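
One way to do that with the example above (NewLoggerSize is a name of my own invention, not part of the original API):

// NewLoggerSize lets the caller choose the channel buffer size rather
// than relying on a hardcoded constant.
func NewLoggerSize(bufSize int) *Logger {
	return &Logger{logChan: make(chan string, bufSize)}
}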

So a good size for a buffer could be zero, one or some other small number (say five or less), or a variable size that can be set through the API, but it is unlikely, in my opinion, to be a large constant value like 100. Infinite buffers, that is buffers limited only by available memory, are sometimes useful and not something we should dismiss altogether, although they are obviously not possible to implement directly with channels as provided by Go. It is possible to create a buffer goroutine with an input channel and an output channel that implements a memory-efficient variable-sized buffer, for example as Evan does here, and this is the best choice if you expect your buffering requirements to be large.
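
For reference, here is a minimal sketch of that pattern (my own condensed version, not Evan’s code): a goroutine sits between an input and an output channel and keeps the backlog in a slice, so the buffer grows and shrinks with demand.

// buffer forwards values from in to out, holding any backlog in a slice
// so that sends on in never block for long.
func buffer(in <-chan string, out chan<- string) {
	var queue []string
	for in != nil || len(queue) > 0 {
		// Enable the send case only when something is queued: a send
		// on a nil channel can never proceed, so select ignores it.
		var sendCh chan<- string
		var next string
		if len(queue) > 0 {
			sendCh = out
			next = queue[0]
		}
		select {
		case v, ok := <-in:
			if !ok {
				in = nil // input closed: flush the queue, then stop
			} else {
				queue = append(queue, v)
			}
		case sendCh <- next:
			queue = queue[1:]
		}
	}
	close(out)
}

Callers send on in and receive on out; closing in causes the remaining backlog to be flushed to out, which is then closed.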