The LINUX.COM Article Archive
Originally Published: Wednesday, 7 March 2001 | Author: Pekka Himanen
Published to: daily_feature/Linux.com Feature Story
The Academy and the Monastery
Once again Linux.com is proud to present the second installment from the popular new work, "The Hacker Ethic and the Spirit of the Information Age", by Pekka Himanen, Linus Torvalds and Manuel Castells. (Random House, January 2001, 288 pages). This week: "The Academy and the Monastery".
Dedicated to Eric Raymond, who introduced the allegory of the cathedral and the bazaar
According to the hackers' "jargon file," the original hacker ethic meant the belief that "information-sharing is a powerful positive good." In practice this meant the ethical duty to work through an open development model, in which the hacker gives his or her creation freely for others to use, test, and develop further. Although for the author of this writing the ethical arguments for the hacker model are the most interesting and important ones, there is also a more pragmatic level that is significant and fascinating. Just as we can add to our ethical arguments for the hackers' passionate and free way of working the more pragmatic point that, in the information age, new information is created most effectively by allowing for playfulness and for the possibility of working according to one's individual rhythm, we can likewise say that the open model is not just ethically justified but also very powerful in practice (in fact, the "jargon file" also calls it a "powerful positive good"). It is worth taking a closer look at the hackers' idea of openness from this purely pragmatic viewpoint. Thus all observations in this essay will be purely pragmatic; those who want to read more about the ethical arguments for the hacker model may turn to my book The Hacker Ethic and the Spirit of the Information Age, written with Linus Torvalds and Manuel Castells (Random House, 2001).
The development of the Net is an excellent concrete example of the hacker ethic in action, but the Linux project, which has arguably taken the ideal of openness the furthest so far, serves as an even better one. Torvalds started working on Linux in 1991 while he was a student at the University of Helsinki. After developing an interest in the problems of operating systems, Torvalds installed on his home computer the Unix-like Minix operating system, written by computer-science professor Andrew Tanenbaum, an American then working in the Netherlands. By studying it and using it as a developmental framework, he proceeded to design his own operating system. An essential feature of Torvalds's work was that he involved others in his project from the very beginning. On August 25, 1991, he posted a message on the Net with the subject line "What would you like to see most in minix?" in which he announced that he was "doing a (free) operating system." He received several ideas in reply and even some promises of help in testing the program. The operating system's first version was released on the Net as source code free to all in September 1991.
The next, improved version was available as soon as early October. Torvalds then extended an even more direct invitation to hackers to join him in the development of the new system. In a message sent to the Net, he asked for tips about information sources. He got them, and development advanced quickly. Within a month, other programmers had joined in. Since then, the Linux network has grown at an amazing creative pace. Thousands of programmers have participated in Linux's development, and their numbers are growing steadily. There are millions of users, and their number, too, is growing. Anyone can participate in its development, and anyone is welcome to use it freely.
In order to control the continuous development of Linux, releases have been divided into two series. In the stable versions, safe for use by average users, the y in the release number x.y.z is even (e.g., version 1.0.0), whereas in the developmental versions, aimed at programmers, the y is the stable version's y + 1 (e.g., the stable 1.0.0's improved but not yet fully tested developmental version is 1.1.0). x grows only when a truly fundamental change is made (at the time of writing, the latest available version is 2.4.0). This simple model has worked surprisingly well in the management of Linux development.
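The even/odd numbering rule can be captured in a few lines of Python. This is only an illustrative sketch of the convention described above, not code from the kernel project itself:

```python
def is_stable(version: str) -> bool:
    """True if an x.y.z kernel version string denotes a stable release.

    Under the scheme described in the text, the middle number y is
    even for stable series and odd (the stable y + 1) for
    developmental series.
    """
    x, y, z = (int(part) for part in version.split("."))
    return y % 2 == 0

# The stable 1.0.0 series has 1.1.z as its developmental counterpart;
# 2.4.0 is the latest stable release at the time of writing.
print(is_stable("1.0.0"))  # True
print(is_stable("1.1.0"))  # False
print(is_stable("2.4.0"))  # True
```

A user choosing a kernel would thus pick an even-y release, while a programmer wanting the newest features would follow the odd-y series.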
In his well-known essay "The Cathedral and the Bazaar," published originally on the Net, Eric Raymond has defined the difference between Linux's open model, and the closed model preferred by most companies, by comparing them to the bazaar and the cathedral. Although a technologist himself, Raymond emphasizes that Linux's real innovation was not technical but social: it was the new, completely open social manner in which it was developed. In his vocabulary, it was the shift from the cathedral to the bazaar.
Raymond defines the cathedral as a model in which one person or a very small group of people plans everything in advance and then realizes the plan under its own power. Development occurs behind closed doors, and everybody else will see only the "finished" results. In the bazaar model, on the other hand, ideation is open to everyone, and ideas are handed out to be tested by others from the very beginning. The multiplicity of viewpoints is important: when ideas are disseminated widely in an early stage, they can still benefit from external additions and criticisms by others, whereas when a cathedral is presented in its finished form, its foundations can no longer be changed. In the bazaar, people try out different approaches, and, when someone has a brilliant idea, the others adopt it and build upon it.
Generally speaking, this open-source model can be described as follows: It all begins with a problem or goal someone finds personally significant. That person may release just the problem or goal itself, but usually he or she will also provide a Solution (version 0.1.1, to use the Linux numbering system). In the open model, a recipient has the right to freely use, test, and develop this Solution. This is possible only if the information that has led to the Solution (the source) has been passed on with it. In the open-source model, the release of these rights entails two obligations: these same rights have to be passed on when the original Solution or its refined version (0.1.2) is shared, and the contributors must always be credited whenever either version is shared. All this is a shared process, in which the participants move gradually, or sometimes even by leaps and bounds (say, a shift from version 0.y.z to version 1.y.z), to better versions. In practice, of course, projects follow this idealized model to a greater or lesser extent.
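The cycle of rights and obligations just described can be modeled in a short Python sketch. The class, method, and field names here are purely illustrative assumptions, not any real project's API:

```python
from dataclasses import dataclass, field

@dataclass
class Solution:
    """Toy model of one step in the open-source release cycle."""
    version: tuple                      # (x, y, z), as in the text
    source: str                         # the information behind the Solution
    credits: list = field(default_factory=list)

    def refine(self, author: str, new_source: str) -> "Solution":
        # Obligation 1: the refined version carries the same open rights,
        # because the source is passed on with it.
        # Obligation 2: every earlier contributor stays credited.
        x, y, z = self.version
        return Solution((x, y, z + 1), new_source,
                        self.credits + [author])

v011 = Solution((0, 1, 1), "initial source", ["originator"])
v012 = v011.refine("contributor", "improved source")
print(v012.version)  # (0, 1, 2)
print(v012.credits)  # ['originator', 'contributor']
```

The point of the sketch is simply that each refinement keeps the source open and the credit list growing; a "leap" from 0.y.z to 1.y.z would change the first number instead of the last.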
Although Raymond's allegory of the bazaar and the cathedral elegantly captures the difference between the open-source and closed-source models, I would like to explain the power of the open model vis-a-vis the closed model further by suggesting another pair of allegories: the academy and the monastery. In fact, the open-source model resembles the academy even more directly than the bazaar. Scientists, too, release their work openly to others for their use, testing, and further development. Their research is based on the idea of an open and self-correcting process. The sociologist Robert Merton wrote that this idea of self-correction was as important a principle to science as openness. He called it organized skepticism; historically, it is a continuation of the synusia of Plato's Academy, which also included the idea of approaching the truth through critical dialogue. The scientific ethic entails a model in which theories are developed collectively and their flaws are perceived and gradually removed by means of criticism provided by the entire scientific community.
Of course, scientists have chosen this model not only for ethical reasons but also because it has proved to be the most successful way of creating scientific knowledge. All of our understanding of nature is based on this academic or scientific model. The reason why the hackers' open-source model works so effectively seems to be not only that the hackers are realizing their passions and are motivated by peer recognition, as scientists are, but also that to a great degree their model conforms to the ideal open academic model, which is historically the one best adapted to information creation.
Broadly speaking, one can say that in the academic model the point of departure also tends to be a problem or goal researchers find personally interesting; they then provide their own Solution (even though in many instances the mere statement of the problem or proclamation of a program is interesting in itself). The academic ethic demands that anyone may use, criticize, and develop this Solution. More important than any final result is the underlying information or chain of arguments that has produced the Solution. (It is not enough merely to publish "E = mc²"; theoretical and empirical justifications are also required.) Nevertheless, the scientific ethic does not involve only rights; it also has the same two fundamental obligations: the sources must always be mentioned (plagiarism is abhorrent), and the new Solution must not be kept secret but must be published again for the benefit of the scientific community. The fulfillment of these two obligations is not required by law but by the scientific community's internal, powerful sanctions.
Following this model, normal physics research, for example, continuously provides new additions ("developmental versions") to what has already been achieved, and after testing these refinements the scientific community accepts them as part of its body of knowledge ("stable versions"). Much more rarely, there is an entire "paradigm shift," to use the expression that philosopher of science Thomas Kuhn introduced in his book The Structure of Scientific Revolutions. In the broadest sense, there have been only three long-lived research paradigms: Aristotelian-Ptolemaic physics, "classic" Newtonian physics, and the Einsteinian-Heisenbergian physics based on the theory of relativity and quantum mechanics. Seen this way, present theories are versions 3.y.z. (Many physicists already call version 4, which they believe is imminent, "The Theory of Everything." Computer hackers would not anticipate the arrival of version 4.0.0 quite so eagerly.)
The opposite of this hacker and academic open model can be called the closed model, which does not just close off information but is also authoritarian. In a business enterprise built on the monastery model, authority sets the goal and chooses a closed group of people to implement it. After the group has completed its own testing, others will have to accept the result purely as it is. Other uses of it are called "unauthorized uses." We can use our allegory of the monastery as an apt metaphor for this style, which is well summed up by Saint Basil the Great's monastic rule from the fourth century: "No one is to concern himself with the superior's method of administration or make curious inquiries about what is being done." The closed model does not allow for initiative or criticism that would enable an activity to become more creative and self-corrective.
It is true that many hackers oppose hierarchical operation for ethical reasons, such as its tendency to foster a culture in which people are humiliated. But they also think that the nonhierarchical manner is the most effective one. From the point of view of a traditionally structured business, this may initially seem quite senseless. How could it ever work? Should not someone draw an organization chart for the Net and Linux developers? It is interesting to note that similar things might be said of science. How could Einstein ever arrive at his E = mc² in the chaos of self-organized groups of researchers? Should science not operate with a clear-cut hierarchy, headed up by a CEO of Science, with a division chief for every discipline?
Both scientists and hackers have learned from experience that the lack of strong structures is one of the reasons why their model is so powerful. Hackers and scientists can just begin to realize their passions, and then network with other individuals who share them. This spirit clearly differs from that found not only in business but also in government. In governmental agencies, the idea of authority permeates an action even more strongly than it does in companies. For the hackers, the typical governmental way of having endless meetings, forming countless committees, drafting tedious strategy papers, and so on before anything happens is at least as great a pain as doing market research to justify an idea before you can start to create. (It also irritates scientists and hackers no end when the university has been turned into a governmental bureaucracy or monastery.)
But the relative lack of structures does not mean that there are no structures. Despite appearances, hackerism does not mean anarchy or paradisiacal utopianism any more than science does. Hacker and scientific projects have their relative guiding figures, such as Torvalds, whose task it is to help determine direction and to support the creativity of others. In addition, both the academic and hacker models have a special publication structure. Research is open to anyone, but in practice contributions included in reputable scientific publications are selected by a smaller group of referees. Still, this model is designed to guarantee that in the long run it is the truth that determines the referee group rather than the other way around. Like the academic referee group, the hacker network's referee group retains its position only as long as its choices correspond to the considered choices of the entire peer community. If the referee group is unable to do this, the community bypasses it and creates new channels. This means that, at bottom, authority status is open to anyone and is based only on achievement: no one can achieve permanent tenure. No one can assume a position in which his or her work could not be reviewed by peers, just as anyone else's creations can be.
It goes without saying that the academy was very influential long before there were computer hackers. For example, from the nineteenth century onward, every industrial technology (electricity, telephone, television, etc.) would have been unthinkable without its underpinning of scientific theory. The late industrial revolution already marked a transition to a society that relied upon scientific results; the hackers remind us that, in the information age, even more important than discrete scientific results is the open academic model that enables the creation of those results.
This is a central insight. In fact, it is so important that the second big reason for the pragmatic success of the hacker model seems to be the fact that hackers' learning is modeled to a large extent the same way as their development of new software (which can actually be seen as the frontier of their collective learning). Thus, their learning model has the same strengths as the development model.
A typical hacker's learning process starts out with setting up an interesting problem, working toward a solution by using various sources, then submitting the solution to extensive testing. Learning more about a subject becomes the hacker's passion. Torvalds initially taught himself programming on a computer he inherited from his grandfather. He set up problems for himself and found out what he needed to know to solve them. Most hackers have learned programming in a similar informal way, following their passions. Examples of ten-year-olds mastering very complicated programming problems tell us much about the importance of passion in learning, in contrast to the slow going their contemporaries often find education in traditional schools to be.
Later on, the beginnings of Torvalds's operating system arose out of his explorations into the processor of the PC he purchased in 1991. In typical hacker fashion, simple experiments with a program that tested the features of the processor by writing out either As or Bs gradually expanded into a plan for a Net newsgroup-reading program and then on to the ambitious idea of an entire operating system. But even though Torvalds is a self-taught programmer in the sense that he acquired his basic knowledge without taking a class, he did not learn everything all by himself. For example, in order to familiarize himself with operating systems, he studied the source codes of Tanenbaum's Minix as well as various other information sources provided by the hacker community. From the very beginning, in true hacker fashion, he has never hesitated to ask for help with questions in areas in which he has not yet acquired expertise.
A prime strength of the hacker learning model lies in the fact that a hacker's learning typically also teaches others. When a hacker studies the source code of a program, he often develops it further, and others can learn from this work. When a hacker checks out information sources maintained on the Net, he often adds helpful information from his own experience. An ongoing, critical, evolutionary discussion forms around various problems. The reward for participating in this discussion is peer recognition.
The hackers' open learning model can be called their "Net Academy." It is a continuously evolving learning environment created by the learners themselves. The learning model adopted by hackers has many advantages. In the hacker world, the teachers or assemblers of information sources are often those who have just learned something. This is beneficial because often someone just engaged in the study of a subject is better able to teach it to others than the expert who no longer comes to it fresh and has, in a way, already lost his grasp of how neophytes think. For an expert, empathizing with someone who is just learning something involves levels of simplification that he or she often resists for personal intellectual reasons. Nor does the expert necessarily find the teaching of basics very satisfying, while a student may find doing such teaching tremendously rewarding, since he or she does not as a rule get to enjoy the position of instructor and is generally not given sufficient opportunity to use his or her talents. The process of teaching also involves by its very nature the comprehensive analysis of subject matter. If one is really able to teach something to others, one must have already made the material very clear to oneself. While preparing the material, one has to consider it carefully from the point of view of possible further questions and counterarguments.
Once again, this hacker model resembles Plato's Academy, where students were not regarded as targets for knowledge transmission but were referred to as companions in learning (synetheis). In the Academy's view, the central task of teaching was to strengthen the learners' ability to pose problems, develop lines of thought, and present criticism. As a result, the teacher was metaphorically referred to as a midwife, a matchmaker, and a master of ceremonies at banquets. It was not the teacher's task to inculcate the students with preestablished knowledge but to help them give birth to things from their own starting points.
In the hacker community, too, we can think of the best experts as the community's gadflies, midwives, and symposiarchs. They are able to spur on the collective learning process, which links the learning and development models together as a powerful hacker way of creating things.
From The Hacker Ethic and the Spirit of the Information Age by Pekka Himanen with Linus Torvalds and Manuel Castells (Random House, 2001). For more, see www.hackerethic.org. This writing can be published freely on the web with this information included.
Pekka Himanen, b. 1973, completed his Ph.D. dissertation in philosophy at the age of 20 at the University of Helsinki. Currently he is doing research with Manuel Castells at the University of California at Berkeley. He has been a member of the President of Finland's small advisory group on network-society issues as well as the group that prepared the Finnish government's network-society strategy. Himanen is also a well-known media intellectual in Finland: he has had his own TV series on the philosophy of technology, and there has even been a play written about him. Himanen lives in Berkeley.