When it comes to free software, there’s a sharp cultural divide between the tech world and the gaming world. It’s normal for a startup to make heavy use of open source, and it’s normal for a big tech company to give away huge codebases for free. This is because the tech world understands the benefits of using open source — including greater software quality, security, and interoperability — as well as the benefits of producing open source, such as the ability to recruit developers to build on your platform.
Not so in gaming, where there is very little sharing of code between companies. And when code is shared, it is generally licensed rather than made open source.
Compare web developers to indie game developers (devs): if you want to make a website, you can choose from a multitude of open source frameworks, each of which has a wide range of free and well-documented add-ons and modifications. If you want to make a video game, however, you have to buy a license to one of the major game engines (Unity or Unreal), and if you want to use, say, a script for lighting, you have to buy it through a DRM-protected asset store. It’s not a very developer-friendly experience, which is one reason many indie game devs choose to make game mods (modifications) instead. Even at the bottom of the stack, in low-level computer graphics APIs, the proprietary DirectX is winning against the open-source OpenGL. And the most popular graphics card manufacturer, NVIDIA, ships proprietary drivers that have to be reverse engineered by the open source community.
Why is it that open source never took hold in gaming? It’s probably not because of a lack of volunteers; many developers enjoy working on game software more than anything else. And the advantages of permissive licensing for the industry as a whole are huge.
So what happened? It’s important to remember that widespread use of open source did not happen by accident. Before the free software and open source movements, the general tech industry behaved much like the game industry does now. Proprietary licenses were common, and open source software looked like kooky idealism rather than a smart business decision. It took a coordinated movement — winning over first enthusiastic developers, and then the companies where they worked — for the industry as a whole to go open source. For any single company, it was a Prisoner’s Dilemma: going open source alone would put it at a competitive disadvantage, despite the collective benefits.
A major way open source won was by convincing programmers, which incentivized tech companies to adopt open source as part of their strategy to attract talent. But a similar push for open source didn’t succeed in gaming. Perhaps this is because game developers form a fairly separate community from the rest of tech. A more likely explanation is that programmers at startups are free to choose the tools they like best, whereas in game studios, the opinions of artists, animators, and designers count for just as much — which makes it much harder to switch toolchains.
After many years of incubation in the game world, though, computer graphics is now gaining newfound importance. Video games created a market for GPUs and graphics software, which have developed to the point where the technology has applications beyond gaming. The most obvious example is VR/AR, which has the potential to succeed mobile as a major general purpose computing platform. Computer graphics is also finding new applications in AI, in building simulators to train models for autonomy and smart objects.
Because computer graphics is now the foundation for a number of important platforms, it has become necessary for graphics programmers to share their code. The first reason is simple functionality: the fewer layers of the stack that are open source, the more of a problem you have with bugs — because “given enough eyeballs, all bugs are shallow” (Linus’s Law, formulated by Eric Raymond in The Cathedral and the Bazaar). Closed-source game engines have as much of a problem with bugs as Windows did. The second reason graphics programmers now need to share their code is that open source erodes barriers to entry, which encourages much-needed experimentation.
The web has always been a hotbed of DIY creativity, and that creativity wouldn’t have been possible without open standards and free software. So for virtual reality and augmented reality to really flourish, graphics programming needs to escape the studio model. Game engines are like the operating systems of VR, and without an open source foundation, there’s a danger that the software will be too buggy to do the hardware justice — like the personal computer during the Microsoft era.
More importantly, without open source, it is harder for developers to contribute. The use cases of virtual reality are not well-understood, and if VR experiences can only be created by well-funded teams, we will miss out on the talents of indie developers, whose imagination will help the technology realize its full potential.