The LINUX.COM Article Archive
Originally Published: Thursday, 14 October 1999 | Author: Jim Voorhees
Published to: corp_features/General
Business, Open Source, and the Search for Perfection
We worked on it for months. Several people went through it line by line. The president of the organization opened a copy of this product, proudly, expectantly. We, too, the editors of this printed report, were proud of our work. But there it was. The error, the typo, the bug.... Jim Voorhees explores the effects of flaws like these and explains how and why the Open Source model is beneficial to product improvement.
We worked on it for months. Several people went through it line by line. They searched for errors, found them, fixed them. Finally, thousands of copies of our product were delivered, ready to be shipped around the world. The president of the organization opened a copy of this product, proudly, expectantly. We, too, the editors of this printed report, were proud of our work.
But there it was. The error, the typo, the bug. It sat openly, obviously, on the first page he came to. There we stood, masters of the printed word, red-faced with embarrassment. Some agonizing followed, but the decision was made to ship the report as it was, complete with bug.
This was a hard-copy publication, but the truth is that any product of the human imagination will have its flaws, its bugs. Recognizing how distant perfection is, the weaver of a Persian rug purposely creates a flaw so that God will know him to be properly modest. In all these things (publications, music, art, science, and Persian rugs), perfection will always be sought and never achieved. They are the convoluted products of imperfect minds, and the imperfections inevitably find their way into print, code, and whatever else we imperfect beings produce. Software is one of these things, of course, but the bugs that appear will often have consequences more severe than the embarrassment that greets the proofreader. Moreover, bug-creating programmers are not often praised for their modesty.
Indeed, the cost of a software bug is often more than the cost of an error in a published work, which is usually simply the embarrassment suffered by authors, editors, and proofreaders. It can mean productivity lost as users find a way to fix or work around it. It can increase the expenses of the user or firm that bought the faulty product in the first place and must buy the fix or even an entirely new product as a replacement. The software producer itself can suffer as its reputation for quality falls and it bears the cost of fixing errors that should not have been shipped. Microsoft shipped an error-filled Word 3.0 for the Macintosh in 1987 and then had to spend $1 million to replace the buggy version. This example is instructive, but not unique. All this, of course, ignores the possible but still unknown results of the Y2K bug, which range from the inconvenience of darkened houses to the horrors of a nuclear meltdown.
Yet the chances of bugs appearing in software have increased as software has grown bigger. And it has grown enormously. Bill Gates' first operating system took up a mere 4,000 bytes; Windows 98 requires a minimum of 120 MB to install. Windows 2000 will be even bigger. And the applications that run on top of these operating systems can be bigger still. The days when I could shove a single-sided floppy disk with OS, application, and files into my Macintosh are long gone. The implications of this for how software is made are not always clearly recognized. The opportunities for error increase exponentially with the size of the program. Indeed, anyone working in the industry knows that the question is not whether bugs will appear, but whether they will be severe enough to require fixing. Software is deemed shippable not when no bugs are left, but when the only bugs remaining are judged to be ones that the customer will tolerate.
Knowledge of this is so common as to be trite, but consumers--users--often do not know it. They are used to products that come closer to perfection. Like my Toyota, which may well last for years with no trips to the shop except for normal maintenance. Or my friend Greg's rotary telephone, still working flawlessly after decades over a circuit-switched system that lets me reach him (or his answering machine) more than 99 percent of the time, day or night. Indeed, as data telecommunications are melded into the world of circuit-switched voice communications, the software on which they run will have to come closer to the standards for reliability the latter have met as a matter of course for decades.
The need for software to undergo strict testing to raise its quality is clearly recognized by those who create software. And they are changing the ways they approach it. Microsoft, for one, has steadily increased the number of people involved in its beta testing programs. According to Michael Cusumano and Richard Selby's Microsoft Secrets, the number of people who tested Microsoft's operating systems rose from 7,000 for MS-DOS 6.0 to 75,000 for Windows NT 3.0 to a planned 400,000 for Windows 95. The trend has continued. Anyone who wants to can become a tester for Windows 2000; the same was true of the recently released Office 2000 suite.
This has prevented a recurrence of the disaster of Word 3.0. It is also an implicit recognition that the software industry is changing: slowly, but radically and surely. As argued here, the growing size of software makes it both easier to create bugs and harder to find and fix enough of them to make software shippable. In addition, the growth of networking and the Internet, and the confluence of data and voice telecommunications, are making it ever more imperative that different programs work together, seamlessly.
These trends increase the need for developers to understand the work of their colleagues well enough to create device drivers and other programs that work on the interfaces between applications. They must even work on different kinds of hardware: not just computers with different CPUs, but different devices entirely. Proprietary software producers must find a balance between how much code to release so that other developers can create such software and how much to keep to themselves. The need for that balance has always been there, but it is shifting in favor of open code.
The need for extensive scrutiny and, of course, open code lies at the heart of the open source model for producing software. The balance that proprietary software firms must agonize to find is automatic for the Open Source software producer. The extensive testing that Microsoft and other proprietary producers are finding ever more necessary is constant in Open Source for as long as the software is used.
Microsoft and other proponents of a closed source model also choose to forgo two advantages available to open source producers. First, their beta testers can only identify a problem; they cannot pinpoint its cause. But those who examine open source software can look at the code and say, "Ahhh, there's the rub!" Second, open source producers have a flexibility that proprietary producers do not. The speed with which Linux is improving to meet the demands of a growing market, particularly in the face of challenges from Microsoft and others, shows this. In addition, open source software can be improved by anyone, anytime. Any systems administrator or developer with an itch can scratch it. You don't like a feature? Change it. You found a bug? Fix it. You don't have to wait for the producer to produce another version or package some changes.
None of this gets us to perfection. There will long be bugs in our future, no matter what software we use, but with open source we may get a step closer to being bug-free even as the world of software changes.
Comments? Email the author of this piece.