Few debates in the computing industry have been as long-running and fierce as “is 'open source' better than 'closed' when it comes to software development?”
The debate has been reignited by disagreements over how companies like Google, Meta, OpenAI and Microsoft should compete for dominance in artificial intelligence systems, with some opting for a closed model and others favoring an open approach.
Here's what you need to know:
What does open source software mean?
Source code is the fundamental building block of the apps you use: the human-readable instructions that tell a computer what to do. A developer may write tens of thousands of lines of source code to create a single program that runs on a computer.
Open source software is computer code that anyone is free to distribute, copy, or modify for their own purposes. The Open Source Initiative, a nonprofit industry group, maintains a formal definition with additional rules and standards, but for the most part it comes down to whether the code is freely and publicly available for anyone to use and improve on.
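To make that concrete, the toy Python snippet below (our illustration, not drawn from any real project) is what source code looks like. "Open-sourcing" a program simply means publishing text like this under a license that lets anyone study, reuse and improve it.

```python
# A toy example of source code: a few human-readable lines that a computer
# can run. Real applications are built from many thousands of lines like these.
def greet(name):
    """Return a greeting for the given name."""
    return f"Hello, {name}!"

print(greet("world"))
```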
What are some examples of open source software?
Some of the most widely used software systems are open source, such as Linux, the operating system on which Google's Android mobile software is built. Another well-known open source product is Firefox, a freely downloadable web browser created by the Mozilla Foundation.
So what is the open vs. closed debate, and how does it relate to artificial intelligence?
Technology companies like Google, OpenAI, and Anthropic have spent billions of dollars developing “closed” or proprietary AI systems whose underlying source code cannot be seen or modified by anyone outside those companies, including their paying customers.
For a long time, this wasn't the norm: Most of these companies open-sourced their AI research so other technologists could study and improve it. But as tech executives began to realize that the pursuit of more advanced AI systems was worth billions of dollars, they began to wall off their research.
Technology companies argue that this is for the good of humanity because these systems are so powerful that in the wrong hands they could cause catastrophic societal damage. Critics say the companies simply want to protect their technology from enthusiasts and competitors.
Meta took a different approach: CEO Mark Zuckerberg decided to open-source the company's large language model, a program that learns skills by analyzing vast amounts of digital text from the internet. Zuckerberg's decision to release Meta's model, LLaMA, meant that any developer could download it and build their own chatbots and other services.
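For a sense of what “any developer could download it” means in practice, here is a minimal, hypothetical sketch in Python using the widely used Hugging Face transformers library. The model identifier is illustrative, and Meta gates LLaMA downloads behind a license agreement, so treat this as an assumption-laden example rather than Meta's official recipe.

```python
# A minimal sketch of running an open-weights language model with the
# Hugging Face `transformers` library. The model ID is illustrative:
# Meta gates LLaMA downloads behind a license agreement, so substitute
# any open model you actually have access to.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # assumption: license already accepted

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Encode a prompt, generate a continuation, and decode it back to text.
prompt = "Explain open source software in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```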
In a recent podcast interview, Zuckerberg said no single organization should have “really superintelligent capabilities that aren't widely shared.”
Is it better to have an open or closed system?
That depends on who you ask.
For many technologists and adherents of hardcore hacker culture, open source is the way to go: World-changing software tools, they say, should be distributed freely so that anyone can use them to build interesting and inspiring technology.
Others say AI is advancing so quickly that its makers should keep a tight rein on it to prevent misuse, and that because such systems take so much time and money to develop, companies are entitled to keep them closed and charge for access.
The debate has already spread beyond Silicon Valley and computer enthusiasts: Lawmakers in the European Union and Washington are working on frameworks for regulating AI, including how to weigh the risks and benefits of open-source AI models.