It's not "an accident", and Go didn't "somehow" fail to replace C++ in its systems programming domain. The reason Go failed to replace C and C++ is no mystery to anyone: a mandatory GC and a rather heavyweight runtime.
When the performance overhead of having a GC is less significant than the cognitive overhead of dealing with manual memory management (or the Rust borrow checker), Go was quite successful: Command line tools and network programs.
Around the time Go was released, it was certainly touted by its creators as a "systems programming language"[1] and a "replacement for C++"[2], but re-evaluating the Go team's claims, I think they didn't quite mean them in the way most of us interpreted them.
1. The Go team members were using "systems programming language" in a very wide sense, one that includes everything that is not scripting or web. I hate this definition with a passion, since it relies on nothing but pure elitism ("systems languages are languages that REAL programmers use, unlike those 'scripting languages'"). Ironically, this usage seems to originate from John Ousterhout[3], who is himself famous for designing a scripting language (Tcl).
Ousterhout's definition of a "system programming language": designed for writing applications from scratch (not just "glue code"), performant, strongly typed, designed for building data structures and algorithms from scratch, and often providing higher-level facilities such as objects and threads.
Ousterhout's definition was outdated even back in 2009, when Go was released, let alone today. Some dynamic languages (such as Python with type hints, or TypeScript) are more strongly typed than C or even Java (with its type erasure). Typing is optional in those languages, but so it is, in effect, in Java (Object) and C (void*, casts). When we talk about the archetypal "strongly typed" language today, we refer to Haskell or Scala rather than C. Scripting languages like Python and JavaScript were already commonly used for writing applications from scratch back in 2009, and far from being ill-adapted to building data structures and algorithms from scratch, Python became the most common language that universities use for teaching data structures and algorithms! The most popular dynamic languages nowadays (Ruby, Python, JavaScript) all have objects, and two out of three (Python and Ruby) have threads (although the GIL makes using threads problematic in the mainstream runtimes). The only real differentiator that remains is raw performance.
The widely accepted definition of a "systems language" today is "a language that can be used to write systems software". Systems software means operating systems or OS-adjacent software such as device drivers, debuggers, hypervisors, or even complex beasts like a web browser. The closest software Go can claim in this category is Docker, but Docker itself is just a complex wrapper around Linux kernel features such as namespaces and cgroups. The actual containerization is done by those features, which are implemented in C.
During the first years of Go, the language team was confronted on golang-nuts by people who wanted to use Go for writing systems software, and they usually evaded answering these questions directly. When pressed, they would admit that Go was not ready for writing OS kernels, at least not yet[4][5][6], but that the GC could be disabled if you wanted to[7] (of course, there isn't any way to free memory then, so it's kind of moot). Eventually, the team concluded that disabling the GC is not meant for production use[8][9], but that was not apparent in the early days.
Eventually the references to "systems language" disappeared from Go's official homepage, and one team member (Andrew Gerrand) even admitted this branding was a mistake[10].
In hindsight, I think the main "systems programming task" that Rob Pike and the other members of the Go team envisioned was the main task Google needed done: writing highly concurrent server code.
2. The Go team members sometimes mentioned replacing C and C++, but only in the context of specific pain points that made "programming in the large" cumbersome with C++: build speed, dependency management, and different programmers using different subsets of the language. I couldn't find any claim from the Go team that Go was meant as a general replacement for C and C++, but the media and the wider programming community generally took Go as a replacement for C and C++.
When you read between the lines, it becomes clear that the C++ replacement angle is more about Google than it is about Go. It seems that in 2009, Google was using C++ as the primary language for writing web servers. For the rest of the industry, Java was (and perhaps still is) the most popular language for this task, with some companies opting for dynamic languages like Python, PHP and Ruby where performance allowed.
Go was a great fit for high-concurrency servers, especially back in 2009. Dynamic languages were slower and lacked native support for concurrency (putting aside Lua, which never got popular for server programming for other reasons). Some of these languages had threads, but these were unworkable due to the GIL. The closest thing was frameworks like Twisted, but they were fully asynchronous and quite hard to use.
Popular static languages like Java and C# were also inconvenient, but in a different way. Both were fully capable of powering high-performance servers, but they were not properly tuned for this use case by default. The common frameworks of the day (Spring, Java EE and ASP.NET) introduced copious amounts of overhead, and the GC was optimized for high throughput but had very bad tail latency (GC pauses) and generally required large heap sizes to be efficient. Dependency management, build, and deployment were also an afterthought. Java had Maven and Ivy, and .NET had NuGet (in 2010) and MSBuild, but these were quite cumbersome to use. Deployment was quite messy, with different packaging methods (multiple JAR files with a classpath, WAR files, EAR files) and the need to make sure the runtime on the server was compatible with your application. Most enthusiasts and many startups just gave up on Java entirely.
The mass migration of dynamic-language programmers to Go was surprising to the Go team, but in hindsight it's pretty obvious. They were concerned about performance but didn't feel like they had a choice: Java was just too complex and enterprisey for them, and eking performance out of Java was not an easy task either. Go, on the other hand, had the simplest deployment model (a single binary), no need for fine-tuning, and a lot of built-in tooling from day one ("gofmt", "godoc", "gotest", cross compilation), and other important tools ("govet", "goprof" and "goinstall", which was later split into "go get" and "go install") were added within one year of the initial release.
The Go team did expect server programs to be the main use for Go and this is what they were targeting at Google. They just missed that the bulk of new servers outside of Google were being written in dynamic languages or Java.
The other "surprising use" of Go was for writing command-line utilities. I'm not sure the original Go team was thinking about that, but it is also quite obvious in hindsight. Go was just so much easier to distribute than any alternative available at the time. Scripting languages like Python, Ruby or Perl had great libraries for writing CLI programs, but distributing your program along with its dependencies, and making sure the runtime and dependencies matched what you needed, was practically impossible without packaging your app for every single OS and distro out there, or relying on the user to install the correct version of Python or Ruby and then use gem or pip to install your package. Java and .NET had slow start times due to their VMs, so they were horrible candidates even if you solved the dependency issues. So the best solution was usually C or C++ with either the "./configure && make install" pattern or a static binary, and both options were quite painful. Go was a winner again: it produced fully static binaries by default and had easy-to-use cross compilation out of the box. Even creating a native package for Linux distros was a lot easier, since all you had to do was package a static binary.
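For anyone who never tried it, Go's cross compilation really is just environment variables; a sketch (binary and target names are made up):

```shell
# Build a fully static Linux binary from any host OS.
# CGO_ENABLED=0 avoids linking against the host's libc.
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o mytool .

# Same source tree, different targets:
GOOS=windows GOARCH=amd64 go build -o mytool.exe .
GOOS=darwin  GOARCH=arm64 go build -o mytool-mac .
```

No cross-toolchain to install, no sysroot to assemble; that alone put it ahead of the C/C++ workflow for CLI tools.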
I'm impressed. That's the most thorough and well-researched comment I've seen on Hackernews, ever. Thank you for taking the time and effort in writing it up.
It compares NuGet with Maven, calling the former cumbersome. That's a tell of gaps in the research, but also a showcase of the overarching problem of C# being held back by people bundling it together with Java and the issues of Java's ecosystem (because NuGet is excellent and on par with Cargo).
NuGet was only released in 2010, so I wasn't really referring to it. I was referring to Maven the build system (not the Maven/Ivy dependency-management part, which was quite a breeze) and MSBuild. Both required wrangling with verbose XML and understanding a lot of syntax (or letting the IDE spew out everything for you and then getting totally lost when you needed to fix something or go beyond what the IDE UI allowed). If anything, MSBuild was somewhat worse than Maven, since the documentation was quite bad, at least back then.
That being said, I'm not sure if you used NuGet in its early days, but I did, and it was not a fun experience. I remember that the NuGet project used to get corrupted quite often and I had to reinstall everything (and back then there was no lockfile, if my memory serves me right, so you'd be getting different versions).
In terms of performance, ASP.NET (not ASP.NET Core) was as bad as contemporary Java EE frameworks, if not worse. You could make a high performance web server by targeting OWIN directly (like you could target the Servlet API with Java), but that came later.
I think you are the one who is bundling things together here: you are confusing the current C#/.NET Core ecosystem with the way it was back in the .NET 4.0/Visual Studio 2008 era: Windows-centric, very hard to automate through the CLI, XML-obsessed, and with rather brittle tooling.
C# did have a lot of good points over Java back then (and certainly now): a less verbose language, better generics (no type erasure), lambda expressions, extension methods, LINQ, etc. Visual Studio was also a better IDE than Eclipse. I personally chose C# over Java at the time (when I could target Windows), but I'm not trying to hide the limits it had back then.
Fair enough. You are right, and I apologize for the rather hasty comment. .NET in 2010 was a completely different beast and an unlikely choice in that context. It would be good for the industry if the perception of that past were not extrapolated onto the current state of affairs.
> Systems software means operating systems or OS-adjacent software such as device drivers, debuggers, hypervisors, or even complex beasts like a web browser. The closest software Go can claim in this category is Docker, but Docker itself is just a complex wrapper around Linux kernel features such as namespaces and cgroups. The actual containerization is done by those features, which are implemented in C.
Android GPU debugger, USB Armory bare metal unikernel firmware, Go compiler, Go linker, bare metal on maker boards like Arduino and ESP32
> Popular static languages like Java and C# were also inconvenient, but in a different way. Both were fully capable of powering high-performance servers, but they were not properly tuned for this use case by default. The common frameworks of the day (Spring, Java EE and ASP.NET) introduced copious amounts of overhead, and the GC was optimized for high throughput but had very bad tail latency (GC pauses) and generally required large heap sizes to be efficient. Dependency management, build, and deployment were also an afterthought. Java had Maven and Ivy, and .NET had NuGet (in 2010) and MSBuild, but these were quite cumbersome to use. Deployment was quite messy, with different packaging methods (multiple JAR files with a classpath, WAR files, EAR files) and the need to make sure the runtime on the server was compatible with your application. Most enthusiasts and many startups just gave up on Java entirely.
Usually a problem only for those that refuse to actually learn about Java and .NET ecosystems.
Still doing great after 25 years, now being copied with the VC ideas to sponsor Kubernetes + WASM selling startups.
Unfortunately I have to quibble a bit, although bravo for such a high effort post.
> When you read between the lines, it becomes clear that the C++ replacement angle is more about Google than it is about Go. It seems that in 2009, Google was using C++ as the primary language for writing web servers
I worked at Google from 2006-2014 and I wouldn't agree with this characterisation, nor actually with many of the things Rob Pike says in his talk.
In 2009 most Google web servers (by unique codebase I mean, not replica count) were written in Java. A few of the oldest web servers were written in C++ like web search and Maps. C++ still dominated infrastructure servers like BigTable. However, most web frontends were written in Java, for example, the Gmail and Accounts frontends were written in Java but the spam filter was written in C++.
Rob's talk is frankly somewhat weird to read as a result. He claims to have been solving Big Problems that only Google had, but AFAIK nobody in Google's senior management asked him to do Go despite a heavy investment in infrastructure. Java and C++ were working fine at the time, and issues like build times were essentially solved by Blaze (a.k.a. Bazel) combined with a truly huge build cluster. Blaze is a command-line tool written in ... drumroll ... Java (with a bit of C++, IIRC).
Rob also makes the very strange claim that Google wasn't using threads in its software stack, or that threads were outright banned. That doesn't match my memory at all. Google servers were all heavily multi-threaded and async at that time, and every server exposed a /threadz URL on its management port that would show you the stack traces of every thread (in both C++ and Java). I have clear memories of debugging race conditions in servers there, well before Go existed.
> The common frameworks of the day (Spring, Java EE and ASP.NET) introduced copious amounts of overhead, and the GC was optimized for high throughput but had very bad tail latency (GC pauses) and generally required large heap sizes to be efficient. Dependency management, build, and deployment were also an afterthought.
Google didn't use any of those frameworks. It also didn't use regular Java build systems or dependency management.
At the time Go was developed Java had both the throughput-optimized parallel GC, and also the latency optimized CMS collector. Two years after Go was developed Java introduced the G1 GC which made the tradeoff more configurable.
I was on-call for Java servers at Google at various points. I don't remember GC being a major issue even back then (and nowadays modern Java GC is far better than Go's). It was sometimes a minor issue requiring tuning to get the best performance out of the hardware. I do remember JIT compilation being a problem because some server codebases were so large that they warmed up too slowly, and this would cause request timeouts when hitting new servers that had just started up, so some products needed convoluted workarounds like pre-warming before answering healthy to the load balancer.
Overall, the story told by Rob Pike about Go's design criteria doesn't match my own recollection of what Google was actually doing. The main project Pike was known for at Google in that era was Sawzall, a Go-like language designed specifically for logs processing, which Google phased out years ago (except in one last project where it's used for scripting, and where I heard the team has now taken over maintenance of the Sawzall runtime; that project was written by me, lol, sorry guys). So maybe his primary experience of Google was actually writing languages for batch jobs rather than web servers, and this explains his divergent views about what was common practice back then?
I agree with your assessment of Go's success outside of Google.
This. I worked at Google around the same time. Adwords and Gmail were customers of my team.
I remember appreciating how much nicer it was to run Java servers, because best practice for C++ was (and presumably still is) to immediately abort the entire process any time an invariant was broken. This meant that it wasn't uncommon to experience queries of death that would trivially shoot down entire clusters. With Java, on the other hand, you'd just abort the specific request and keep chugging.
I didn't really see any appreciable attrition from Java to golang during my time at Google. Similarly, at my last job, the majority of work in golang came from people transitioning from Ruby. I later learned a common reason to choose golang over Java was confusion about the Java tooling and development workflow. For example, folks coming from Ruby would often debug with log statements and process restarts instead of just using a debugger and hot-patching code.
Yeah. C++ has exceptions but using them in combination with manual memory management is nearly impossible, despite RAII making it appear like it should be a reasonable thing to do. I was immediately burned by this the first time I wrote a codebase that combined C++ and exceptions, ugh, never again. Pretty sure I never encountered a C++ codebase that didn't ban exceptions by policy and rely on error codes instead.
This very C oriented mindset can be seen in Go's design too, even though Go has GC. I worked with a company using Go once where I was getting 500s from their servers when trying to use their API, and couldn't figure out why. I asked them to check their logs to tell me what was going wrong. They told me their logs didn't have any information about it, because the error code being logged only reflected that something had gone wrong somewhere inside a giant library and there were no stack traces to pinpoint the issue. Their suggested solution: just keep trying random things until you figure it out.
That was an immediate and visceral reminder of the value of exceptions, and by implication, GC.
> In 2009 most Google web servers (by unique codebase I mean, not replica count) were written in Java. A few of the oldest web servers were written in C++ like web search and Maps. C++ still dominated infrastructure servers like BigTable. However, most web frontends were written in Java, for example, the Gmail and Accounts frontends were written in Java but the spam filter was written in C++.
Thank you. I don't know much about the breakdown of different services by language at Google circa 2009, so your feedback helps me put things in focus. I knew that Java was more popular than the way Rob described it (in his 2012 talk[1], not this essay), but I didn't know by how much.
I would still argue that replacing C and C++ in server code was the main impetus for developing Go. This would be a rather strange impetus outside a big tech company like Google, which was writing a lot of C++ server code to begin with. But it also seems that Go was developed quite independently of Google's own problems.
> Rob also makes the very strange claim that Google wasn't using threads in its software stack, or that threads were outright banned. That doesn't match my memory at all.
I can't say anything about Google, but I also found that statement baffling. If you wanted to develop a scalable network server in Java at that time, you pretty much had to use threads. With C++ you had a few other alternatives (you could develop a single-threaded server using an asynchronous library, Boost.Asio for instance), but that was probably harder than dealing with deadlocks and race conditions (which are still very much a problem in Go, the same way they are in multi-threaded C++ and Java).
> Google didn't use any of those frameworks. It also didn't use regular Java build systems or dependency management.
Yes, I am aware of that part, and it makes it clearer to me that Go wasn't trying to solve any particular problem with the way Java was used within Google. I also don't think Go won over many experienced Java developers who already knew how to deal with Java. But it did offer a simpler build, deployment, and configuration story than Java, and that's why it attracted many Python and Node.js developers where Java failed to do so.
Many commentators have mentioned better performance and fewer errors thanks to static typing as the main attraction for dynamic-language programmers coming to Go, but those cannot be the only reasons, since Java had both long before Go came into being.
> At the time Go was developed Java had both the throughput-optimized parallel GC, and also the latency optimized CMS collector. Two years after Go was developed Java introduced the G1 GC which made the tradeoff more configurable.
Frankly speaking, GC was a rather minor problem for people coming from dynamic languages. The main issue for this type of developer is that the GC in Java has to be configured: in practice, most of the developers I've worked with (even seasoned Java developers) do not know how to configure and benchmark the Java GC, which is quite an issue.
JVM warmup was and still is a major issue in Java. Newer features like AppCDS help a lot, but they require some knowledge, understanding and work. Go solves that out of the box by forgoing JIT (of course, it loses other important optimizations that a JIT natively enables, like monomorphic dispatch).
The Google codebase had the delightful combination of both heavily async callback oriented APIs and also heavy use of multithreading. Not surprising for a company for whom software performance was an existential problem.
The core libraries were not only multi-threaded, but threaded in such a way that there was no way to shut them down cleanly. I was rather surprised when I first learned this fact during initial training, but the rationale made perfect sense: clean shutdown in heavily threaded code is hard and can introduce a lot of synchronization bugs, but Google software was all designed on the assumption that the whole machine might die at any moment. So why bother with clean shutdown when you had to support unclean shutdown anyway? Might as well just SIGKILL things when you're done with them.
And by core libraries I mean things like the RPC library, without which you couldn't do anything at all. So that I think shows the extent to which threading was not banned at Google.
This principle (always shutdown uncleanly) was a significant point of design discussion in Kubernetes, another one of the projects that adapted lessons learned inside Google on the outside (and had to change as a result).
All of the core services (kubelet, apiserver, etc) mostly expect to shutdown uncleanly, because as a project we needed to handle unclean shutdowns anyway (and could fix bugs when they happened).
But quite a bit of the software run by Kubernetes (both early and today) doesn’t always necessarily behave that way - most notably Postgres in containers in the early days of Docker behaved badly when KILLed (where Linux terminates the process without it having a chance to react).
So, faced with the expectation that Kubernetes would run a wide range of software where a Google-specific principle didn't hold and couldn't be enforced, Kubernetes always (modulo bugs, or helpful contributors regressing under-tested code paths) sends TERM, waits a few seconds, then KILLs.
And the lack of graceful Go HTTP server shutdown for many years (as well as it being hard to do correctly in big, complex servers) also made Kube apiservers harder to run in a highly available fashion for most deployers. If you don't fully control the load-balancing infrastructure in front of every server the way Google does (because every large company already has a general load-balancer approach built from Apache or nginx or haproxy or F5 or Cisco or ...), or can't enforce that all clients handle all errors gracefully, you tend to prefer draining servers via code over letting those errors escape to users. We ended up having to retrofit graceful shutdown onto most of Kube's server software after the fact, which was more effort than doing it from the beginning.
In a very real sense, Google’s economy of software scale is that it can enforce and drive consistent tradeoffs and principles across multiple projects where making a tradeoff saves effort in multiple domains. That is similar to the design principles in a programming language ecosystem like Go or orchestrator like Kubernetes, but is more extensive.
But those principles are inevitably under communicated to users (because who reads the docs before picking a programming language to implement a new project in?) and under enforced by projects (“you must be this tall to operate your own Kubernetes cluster”).
John Ousterhout had famously written that threads were bad, and many people agreed with him because they seemed to be very hard to use.
Google software avoided them almost always, pretty much banning them outright, and the engineers doing the banning cited Ousterhout.
Yeah this is simply not true. It was threads AND async in the C++ world (and C++ was of course most of the cycles)
The ONLY way to use all your cores is either threads or processes, and Google favored threads over processes (at least fork() based concurrency, which Burrows told people not to use).
For example, I'm 99.99% sure MapReduce workers started a bunch of threads to use the cores within a machine, not a bunch of processes. It's probably in the MapReduce paper.
So it can't be even a little true that threads were "avoided almost always"
---
What I will say is that the pattern to, say, fan out requests to 50 or 200 servers and join the results in C++ was async. It wasn't idiomatic to use threads for that because of the cost, not because threads are "hard to use". (I learned that from hacking on Jeff Dean's tiny low-latency gsearch code in 2006.)
But even as early as 2009, people pushed back and used shitloads of threads, because ASYNC is hard to use -- it's a lot of manual state management.
e.g. From the paper about the incremental indexing system that launched in ~2009
Early in the implementation of Percolator, we decided to make all API calls blocking and rely on running THOUSANDS OF THREADS PER MACHINE to provide enough parallelism to maintain good CPU utilization. We chose this thread-per-request model mainly to make application code easier to write, compared to the event-driven model. Forcing users to bundle up their state each of the (many) times they fetched a data item from the table would have made application development much more difficult. Our experience with thread-per-request was, on the whole, positive: application code is simple, we achieve good utilization on many-core machines, and crash debugging is simplified by meaningful and complete stack traces. We encountered fewer race conditions in application code than we feared. The biggest drawbacks of the approach were scalability issues in the Linux kernel and Google infrastructure related to high thread counts. Our in-house kernel development team was able to deploy fixes to address the kernel issues
To say threads were "almost always avoided" is indeed ridiculous -- IIRC this was a few dedicated clusters of >20,000 machines running 2000-5000+ threads each ... (on I'd guess ~32 cores at the time)
I remember being in a meeting where the indexing VP mentioned the kernel patches mentioned above, which is why I thought of that paper
Also as you say there were threads all over the place in other areas too, GWS, MapReduce, etc.
I like that a lot of the comments are concerned with Go's failure to replace C++ as the de facto systems language yet Rob Pike doesn't mention anything about this in a retrospective article about what the language designers got wrong. He must be so embarrassed about it.
> Some dynamic languages (such as Python with type hints or TypeScript) are more strongly typed than C or even Java (with its type erasure).
The Python runtime does not care about type hints; TS types are erased when compiled down to JS. How does Java's type erasure make it less strongly typed than those two? You can compare the capability of the type systems, and Java's may well be lesser, but I don't really see how erasure matters here.
When the performance overhead of having a GC is less significant than the cognitive overhead of dealing with manual memory management (or the Rust borrow checker), Go was quite successful: Command line tools and network programs.
Around the time Go was released, it was certainly touted by its creators as a "systems programming language"[1] and a "replacement for C++"[2], but re-evaluating the Go team's claims, I think they didn't quite mean in the way most of us interpreted them.
1. The Go Team members were using "systems programming language" in a very wide sense, that include everything that is not scripting or web. I hate this defintion with passion, since it relies on nothing but pure elitism ("Systems language are languages that REAL programmers uses, unlike those "Scripting Languages"). Ironically, this usage seems to originate from John Ousterhout[3], who is himself famous for designing a scripting language (Tcl).
Ousterhout's definition of "system programming language" is: Designed to write applications from scratch (not just "glue code"), performant, strongly typed, designed for building data structures and algorithms from scratch, often provide higher-level facilities such as objects and threads.
Ousterhout's definition was outdated even back in 2009, when Go was released, let alone today. Some dynamic languages (such as Python with type hints or TypeScript) are more strongly typed than C or even Java (with its type erasure). Typing is optional, but so it is in Java (Object), and C (void*, casting). When we talk about the archetypical "strongly typed" language today we would refer to Haskell or Scala rather than C. Scripting languages like Python and JavaScript were already commonly used "for writing applications from scratch" back in 2009, and far from being ill-adapted for writing data structures and algorithms from scratch, Python became the most common language that universities are using for teaching data structures and algorithms! The most popular dynamic languages nowadays (Ruby, Python, JavaScript) all have objects, and 2 out of 3 (Python and Ruby) have threads (although GIL makes using threads problematic in the mainstream runtimes). The only real differentiator that remains is raw performance.
The widely accepted definition of a "systems language" today is "a language that can be used to systems software". Systems software are either operating systems or OS-adjacent software such as device drivers, debuggers, hypervisors or even complex beasts like a web browser. The closest software that Go can claim in this category is Docker, but Docker itself is just a complex wrapper around Linux kernel features such as namespaces and cgroups. The actual containerization is done by these features which are implemented in C.
During the first years of Go, the Go language team was confronted on golang-nuts by people who wanted to use go for writing systems software and they usually evaded directly answering these questions. When pressed, they would admit that Go is not ready for writing OS kernels, at least not now[4][5][6], but GC could be disabled if you want to[7] (of course, there isn't any way to free memory then, so it's kinda moot). Eventually, the team came to a conclusion that disabling GC is not meant for production use[8][9], but that was not apparent in the early days.
Eventually the references for "systems language" disappeared from Go's official homepage and one team member (Andrew Gerrand) even admitted this branding was a mistake[10].
In hindsight, I think the main "systems programming task" that Rob Pike and the other members of the Go team envisioned was the one Google itself needed most: writing highly concurrent server code.
2. The Go team members sometimes mentioned replacing C and C++, but only in the context of specific pain points that made "programming in the large" cumbersome with C++: build speed, dependency management, and different programmers using different subsets of the language. I couldn't find any claim from the Go team that Go was meant as a general replacement for C and C++, but the media and the wider programming community generally took Go as a replacement for them.
When you read between the lines, it becomes clear that the C++ replacement angle is more about Google than it is about Go. It seems that in 2009, Google was using C++ as the primary language for writing web servers. For the rest of the industry, Java was (and perhaps still is) the most popular language for this task, with some companies opting for dynamic languages like Python, PHP and Ruby where performance allowed.
Go was a great fit for high-concurrency servers, especially back in 2009. Dynamic languages were slower and lacked native support for concurrency (if you put aside Lua, which never got popular for server programming for other reasons). Some of these languages had threads, but they were unworkable due to the GIL. The closest thing was frameworks like Twisted, but these were fully asynchronous and quite hard to use.
Popular static languages like Java and C# were also inconvenient, but in a different way. Both of these languages were fully capable of writing high-performance servers, but they were not properly tuned for this use case by default. The common frameworks of the day (Spring, Java EE and ASP.NET) introduced copious amounts of overhead, and the GC was optimized for high throughput but had very bad tail latency (GC pauses) and generally required large heap sizes to be efficient. Dependency management, builds and deployment were also an afterthought. Java had Maven and Ivy, and .NET had NuGet (in 2010) and MSBuild, but these were quite cumbersome to use. Deployment was quite messy, with different packaging methods (multiple JAR files with a classpath, WAR files, EAR files) and the need to make sure the runtime on the server was compatible with your application. Most enthusiasts and many startups just gave up on Java entirely.
The mass migration of dynamic-language programmers to Go was surprising to the Go team, but in hindsight it's pretty obvious. They were concerned about performance, but didn't feel like they had a choice: Java was just too complex and enterprisey for them, and eking performance out of Java was not an easy task either. Go, on the other hand, had the simplest deployment model (a single binary), needed no fine tuning, and had a lot of built-in tooling from day one ("gofmt", "godoc", "gotest", cross compilation), while other important tools ("govet", "goprof" and "goinstall", which was later split into "go get" and "go install") were added within a year of its initial release.
The Go team did expect server programs to be the main use for Go and this is what they were targeting at Google. They just missed that the bulk of new servers outside of Google were being written in dynamic languages or Java.
The other "surprising use" of Go was for writing command-line utilities. I'm not sure if the original Go team was thinking about that, but it is also quite obvious in hindsight. Go was just so much easier to distribute than any alternative available at the time. Scripting languages like Python, Ruby or Perl had great libraries for writing CLI programs, but distributing your program along with its dependencies and making sure the runtime and dependencies matched what you needed was practically impossible without either packaging your app for every single OS and distro out there or relying on the user to install the correct version of Python or Ruby and then use gem or pip to install your package. Java and .NET had slow start times due to their VMs, so they were horrible candidates even if you solved the dependency issues. So the best solution was usually C or C++ with either the "./configure && make install" pattern or a static binary; both options were quite horrible. Go was a winner again: it produced fully static binaries by default and had easy-to-use cross compilation out of the box. Even creating a native package for Linux distros was a lot easier, since all you had to do was package a static binary.
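For concreteness, here is what that distribution story looks like in practice. This is the modern GOOS/GOARCH form (before Go 1.5 you had to build the toolchain per target first), and the tool name "mytool" is made up:

```
# Build a self-contained binary for the current platform.
go build -o mytool .

# Cross-compile for other platforms from the same machine,
# no separate toolchains or sysroots needed.
GOOS=linux   GOARCH=amd64 go build -o mytool-linux-amd64 .
GOOS=darwin  GOARCH=arm64 go build -o mytool-darwin-arm64 .
GOOS=windows GOARCH=amd64 go build -o mytool-windows-amd64.exe .

# If any dependency pulls in cgo, CGO_ENABLED=0 keeps the
# binary fully static (no libc dynamic linking).
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o mytool-static .
```

Each output file is a single binary you can drop onto the target machine, which is exactly the property that made Go CLIs so easy to ship compared to a Python or Ruby install plus pip/gem.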
[1]: https://opensource.googleblog.com/2009/11/hey-ho-lets-go.htm...
[2]: https://web.archive.org/web/20091114043422/http://www.golang...
[3]: https://users.ece.utexas.edu/~adnan/top/ousterhout-scripting...
[4]: https://groups.google.com/g/golang-nuts/c/6vvOzYyDkWQ/m/3T1D...
[5]: https://groups.google.com/g/golang-nuts/c/BO1vBge4L-o/m/lU1_...
[6]: https://groups.google.com/g/golang-nuts/c/UgbTmOXZ_yw/m/NH0j...
[7]: https://groups.google.com/g/golang-nuts/c/UgbTmOXZ_yw/m/M9r1...
[8]: https://groups.google.com/g/golang-nuts/c/qKB9h_pS1p8/m/1NlO...
[9]: https://github.com/golang/go/issues/13761#issuecomment-16772...
[10]: https://go.dev/talks/2011/Real_World_Go.pdf (Slide #25)