Open Source Software
A (New?) Development Methodology
{ The body of the Halloween Document is an internal strategy memorandum on Microsoft's possible responses to the Linux/Open Source phenomenon. (This annotated version has been renamed ``Halloween I''; there's a sequel, ``Halloween II'', which marks up a second memo more specifically addressing Linux.)
Microsoft has publicly acknowledged that this memorandum is authentic, but dismissed it as a mere engineering study that does not define Microsoft policy.
However, the list of collaborators mentioned at the end includes some people who are known to be key players at Microsoft, and the document reads as though the research effort had the cooperation of top management; it may even have been commissioned as a policy white paper for Bill Gates's attention (the author seems to have expected that Gates would read it).
Either way, it provides us with a very valuable look past Microsoft's dismissive marketing spin about Open Source at what the company is actually thinking -- which, as you'll see, is an odd combination of astuteness and institutional myopia.
Despite some speculation that this was an intentional leak, this seems quite unlikely. The document is too damning; portions could be considered evidence of anti-competitive practices for the DOJ lawsuit. Also, the author ``refused to confirm or deny'' when initially contacted, suggesting that Microsoft didn't have its story worked out in advance.
Since the author quoted my analyses of open-source community dynamics (The Cathedral and the Bazaar and Homesteading the Noosphere) extensively, it seems fair that I should respond on behalf of the community. :-)
Key Quotes:
Here are some notable quotes from the document, with hotlinks to where they are embedded. It's helpful to know that ``OSS'' is the author's abbreviation for ``Open Source Software''. FUD, a characteristic Microsoft tactic, is explained here.
* OSS poses a direct, short-term revenue and platform threat to Microsoft, particularly in server space. Additionally, the intrinsic parallelism and free idea exchange in OSS has benefits that are not replicable with our current licensing model and therefore present a long term developer mindshare threat.
* Recent case studies (the Internet) provide very dramatic evidence ... that commercial quality can be achieved / exceeded by OSS projects.
* ...to understand how to compete against OSS, we must target a process rather than a company.
* OSS is long-term credible ... FUD tactics can not be used to combat it.
* Linux and other OSS advocates are making a progressively more credible argument that OSS software is at least as robust -- if not more -- than commercial alternatives. The Internet provides an ideal, high-visibility showcase for the OSS world.
* Linux has been deployed in mission critical, commercial environments with an excellent pool of public testimonials. ... Linux outperforms many other UNIXes ... Linux is on track to eventually own the x86 UNIX market ...
* Linux can win as long as services / protocols are commodities.
* OSS projects have been able to gain a foothold in many server applications because of the wide utility of highly commoditized, simple protocols. By extending these protocols and developing new protocols, we can deny OSS projects entry into the market.
* The ability of the OSS process to collect and harness the collective IQ of thousands of individuals across the Internet is simply amazing. More importantly, OSS evangelization scales with the size of the Internet much faster than our own evangelization efforts appear to scale.
How To Read This Document:
Comments in green, surrounded by curly brackets, are me (Eric S. Raymond). I have highlighted what I believe to be key points in the original text by turning them red. I have inserted comments near these key points; you can skim the document by surfing through this comment index in sequence.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28
I've embedded a few other comments in green that aren't associated with key points and aren't indexed. These additional comments are only of interest if you're reading the entire document.
I have otherwise left the document completely as-is (not even correcting typos), so you can read what Bill Gates is reading about Open Source. It's a bit long, but persevere. An accurate fix on the opposition's thinking is worth some effort -- and there are one or two really startling insights buried in the corporatespeak.
Threat Assessment:
I believe that far and away the most dangerous tactic advocated in this memorandum is that embodied in the sinister phrase ``de-commoditize protocols''.
If publication of this document does nothing else, I hope it will alert everyone to the stifling of competition, the erosion of consumer choice, the higher costs, and the monopoly lock-in that this tactic implies.
The parallel with Microsoft's attempted hijacking of Java, and its attempts to spoil the ``write once, run anywhere'' potential of this technology, should be obvious.
I have included an extended discussion of this point in my interlinear comments. To prevent this tactic from working, I believe open-source advocates must begin emphasizing these points:
The first (1.1) annotated version of the VinodV memorandum was prepared over the weekend of 31 Oct-1 Nov 1998. It is in recognition of the date, and my fond hope that publishing it will help realize Microsoft's worst nightmares, that I named it the ``Halloween Document''.
The 1.2 version featured cleanup of non-ASCII characters.
The 1.3 version noted Microsoft's acknowledgement of authenticity.
The 1.4 version added a bit more analysis and the section on Threat Assessment.
The 1.5 version added some bits to the preamble.
The 1.6 version added more to one of the comments.
The 1.7 version added the reference to the Fuzz papers.
The 1.8 version added a link to the Halloween II document.
The 1.9 version added a note about HTTP-DAV support.
The 1.10 version added more on the ``who do you sue?'' question.
The 1.11 version added perceptive comments from the Learning From Linux page by Tom Nadeau, an OS/2 advocate. The 1.12 version added illuminating comments by a former Microserf who wishes to remain nameless. }
Vinod Valloppillil (VinodV)
Aug 11, 1998 -- v1.00
Microsoft Confidential
Table of Contents
*Executive Summary
*Open Source Software
*What is it?
*Software Licensing Taxonomy
*Open Source Software is Significant to Microsoft
*History
*Open Source Process
*Open Source Development Teams
*OSS Development Coordination
*Parallel Development
*Parallel Debugging
*Conflict resolution
*Motivation
*Code Forking
*Open Source Strengths
*OSS Exponential Attributes
*Long-term credibility
*Parallel Debugging
*Parallel Development
*OSS = `perfect' API evangelization / documentation
*Release rate
*Open Source Weaknesses
*Management Costs
*Process Issues
*Organizational Credibility
*Open Source Business Models
*Secondary Services
*Loss Leader -- Market Entry
*Commoditizing Downstream Suppliers
*First Mover -- Build Now, $$ Later
*Linux
*What is it?
*Linux is a real, credible OS + Development process
*Linux is a short/medium-term threat in servers
*Linux is unlikely to be a threat on the desktop
*Beating Linux
*Netscape
*Organization & Licensing
*Strengths
*Weaknesses
*Predictions
*Apache
*History
*Organization
*Strengths
*Weaknesses
*IBM & Apache
*Other OSS Projects
*Microsoft Response
*Product Vulnerabilities
*Capturing OSS benefits -- Developer Mindshare
*Capturing OSS benefits -- Microsoft Internal Processes
*Extending OSS benefits -- Service Infrastructure
*Blunting OSS attacks
*Other Interesting Links
*Acknowledgments
*Revision History
Open Source Software
A (New?) Development Methodology
Executive Summary
Open Source Software (OSS) is a development process which promotes rapid creation and deployment of incremental features and bug fixes in an existing code / knowledge base. In recent years, corresponding to the growth of the Internet, OSS projects have acquired the depth & complexity traditionally associated with commercial projects such as Operating Systems and mission critical servers.
{ OK, this establishes that Microsoft isn't asleep at the switch. }
However, other OSS process weaknesses provide an avenue for Microsoft to garner advantage in key feature areas such as architectural improvements (e.g. storage+), integration (e.g. schemas), ease-of-use, and organizational support.
{ This summary recommendation is mainly interesting for how it fails to cover the specific suggestions later on in the document about de-commoditizing protocols etc. I'm told by a former Microserf that the references to "Storage+" here and in the executive summary are much more significant than they seem. MS's plan for the next few years is to move to an integrated file/data/storage system based upon Exchange, completely replacing the current FAT and NTFS file systems. They are absolutely planning on one monolithic structure, called "megaserver", as their next strategic infrastructure. The lock-in effect of this would be immense if they succeed. }
Open Source Software
What is it?
Open Source Software (OSS) is software in which both source and binaries are distributed or accessible for a given product, usually for free. OSS is often mistaken for "shareware" or "freeware", but there are significant differences between these licensing models and the process around each product.
Software Licensing Taxonomy

Software Type                      | ZP   | RD | UU | SA | SM | CI | DF
-----------------------------------+------+----+----+----+----+----+----
Commercial                         |      |    |    |    |    |    |
Trial Software                     | X(1) | X  |    |    |    |    |
Non-Commercial Use                 | X(2) | X  |    |    |    |    |
Shareware                          | X(3) | X  |    |    |    |    |
Royalty-free binaries ("Freeware") | X    | X  | X  |    |    |    |
Royalty-free libraries             | X    | X  | X  | X  |    |    |
Open Source (BSD-Style)            | X    | X  | X  | X  | X  |    |
Open Source (Apache Style)         | X    | X  | X  | X  | X  | X  |
Open Source (Linux/GNU style)      | X    | X  | X  | X  | X  | X  | X

License features: ZP = Zero Price; RD = Redistributable; UU = Unlimited Usage; SA = Source Code Available; SM = Source Code Modifiable; CI = Public "Check-ins" to core codebase; DF = All derivatives must be free.
(1) Non-full featured. (2) Usage dependent. (3) Unenforced licensing.
The broad categories of licensing include:
Commercial software is classic Microsoft bread-and-butter. It must be purchased, may NOT be redistributed, and is typically only available as binaries to end users.
Limited trial software usually consists of functionally limited versions of commercial software which are freely distributed and intended to drive purchase of the commercial code. Examples include 60-day time-bombed evaluation products.
Shareware products are fully functional and freely redistributable but have a license that mandates eventual purchase by both individuals and corporations. Many internet utilities (like "WinZip") take advantage of shareware as a distribution method.
Non-commercial use software is freely available and redistributable by non-profit making entities. Corporations, etc. must purchase the product. An example of this would be Netscape Navigator.
Royalty-free binaries consist of software which may be freely used and distributed in binary form only. Internet Explorer and NetMeeting binaries fit this model.
Royalty-free libraries are software products whose binaries and source code are freely used and distributed but may NOT be modified by the end customer without violating the license. Examples of this include class libraries, header files, etc.
A small, closed team of developers develops BSD-style open source products & allows free use and redistribution of binaries and code. While users are allowed to modify the code, the development team does NOT typically take "check-ins" from the public.
Apache takes the BSD-style open source model and extends it by allowing check-ins to the core codebase by external parties.
CopyLeft or GPL (General Public License) based software takes the Open Source license one critical step farther. Whereas BSD and Apache style software permits users to "fork" the codebase and apply their own license terms to their modified code (e.g. make it commercial), the GPL license requires that all derivative works in turn must also be GPL code. "You are free to hack this code as long as your derivative is also hackable"
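{ To make the cumulative structure of the taxonomy concrete, here is a small sketch (mine, not the memo's; the feature names are paraphrased, and the Non-Commercial Use row is omitted for brevity) that encodes each licensing model as the set of rights it grants. Each model in the table grants a superset of the rights of the model above it:

```python
# Toy encoding of the memo's licensing taxonomy. Each licensing model
# grants a prefix of this ordered list of rights; names are paraphrased.
RIGHTS = [
    "zero price",
    "redistributable",
    "unlimited usage",
    "source available",
    "source modifiable",
    "public check-ins",
    "derivatives must be free",
]

# Number of rights (taken in order) that each model grants.
MODELS = {
    "commercial": 0,
    "trial software": 2,
    "shareware": 2,
    "freeware": 3,
    "royalty-free libraries": 4,
    "open source (BSD-style)": 5,
    "open source (Apache-style)": 6,
    "open source (GPL-style)": 7,
}

def rights_of(model):
    """Return the list of rights a given licensing model grants."""
    return RIGHTS[:MODELS[model]]

# Only the GPL-style column carries the copyleft obligation:
assert "derivatives must be free" in rights_of("open source (GPL-style)")
assert "derivatives must be free" not in rights_of("open source (Apache-style)")
```

}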
{ To us, open-source licensing and the rights it grants to users and third parties are primary, and specific development practice varies ad hoc in a way not especially coupled to our license variations. In this Microsoft taxonomy, on the other hand, the central distinction is who has write access to a privileged central code base.
This reflects a much more centralized view of reality, and a failure of imagination or understanding on the memo author's part. He doesn't grok our distributed-development tradition fully. This is hardly surprising... }
Open Source Software is Significant to Microsoft
This paper focuses on Open Source Software (OSS). OSS is acutely different from the other forms of licensing (in particular "shareware") in two very important respects:
OSS is a concern to Microsoft for several reasons:
A key barrier to entry for OSS in many customer environments has been its perceived lack of quality. OSS advocates contend that the greater code inspection & debugging in OSS software results in higher quality code than commercial software.
Recent case studies (the Internet) provide very dramatic evidence in customer's eyes that commercial quality can be achieved / exceeded by OSS projects. At this time, however there is no strong evidence of OSS code quality aside from anecdotal.
{ These sentences, taken together, are rather contradictory unless the ``recent case studies'' are all ``anecdotal''. But if so, why call them ``very dramatic evidence''? It appears there's a bit of self-protective backing and filling going on in the second sentence. Nevertheless, the first sentence is a huge concession for Microsoft to make (even internally).
In any case, the `anecdotal' claim is false. See Fuzz Revisited: A Re-examination of the Reliability of UNIX Utilities and Services .
Here are three pertinent lines from this paper:
"The failure rate of utilities on the commercial versions of UNIX that we tested . . . ranged from 15-43%."
"The failure rate of the utilities on the freely-distributed Linux version of UNIX was second-lowest, at 9%."
"The failure rate of the public GNU utilities was the lowest in our study, at only 7%."
TN remarks: Note the clever distinction here (which Eric missed in his analysis): ``customer's eyes'' (in Microsoft's own words) rather than any real code quality. In other words, to Microsoft and the software market in general, a software product has ``commercial quality'' if it has the ``look and feel'' of commercial software products. A product has commercial-quality code if and only if there is a public perception that it is made with commercial-quality code. This means that MS will take seriously any product that has an appealing, commercial-looking appearance, because MS assumes -- rightly so -- that this is what the typical, uninformed consumer uses as the judgment benchmark for what is ``good code''.
TN is probably right. This didn't occur to me because, like most open-source programmers, I consider programs that crash and screw up a lot to be junk no matter how pretty their interfaces are... }
Another barrier to entry that has been tackled by OSS is project complexity. OSS teams are undertaking projects whose size & complexity had heretofore been the exclusive domain of commercial, economically-organized/motivated development teams. Examples include the Linux Operating System and Xfree86 GUI.
OSS process vitality is directly tied to the Internet to provide distributed development resources on a mammoth scale. Some examples of OSS project size:
Project                   | Lines of Code
--------------------------+--------------
Linux Kernel (x86 only)   | 500,000
Apache Web Server         | 80,000
SendMail                  | 57,000
Xfree86 X-windows server  | 1.5 Million
"K" desktop environment   | 90,000
Full Linux distribution   | ~10 Million
The OSS process is unique in its participants' motivations and the resources that can be brought to bear on problems. OSS, therefore, has some interesting, non-replicable assets which should be thoroughly understood.
History
Open source software has roots in the hobbyist and the scientific community and was typified by ad hoc exchange of source code by developers/users.
Internet Software
The largest case study of OSS is the Internet. Most of the earliest code on the Internet was, and still is, based on OSS, as described in an interview with Tim O'Reilly (http://www.techweb.com/internet/profile/toreilly/interview):
TIM O'REILLY: The biggest message that we started out with was, "open source software works." ... BIND has absolutely dominant market share as the single most mission-critical piece of software on the Internet. Apache is the dominant Web server. SendMail runs probably eighty percent of the mail servers and probably touches every single piece of e-mail on the Internet.
Free Software Foundation / GNU Project
Credit for the first instance of modern, organized OSS is generally given to Richard Stallman of MIT. In late 1983, Stallman created the Free Software Foundation (FSF) -- http://www.gnu.ai.mit.edu/fsf/fsf.html -- with the goal of creating a free version of the UNIX operating system. The FSF released a series of sources and binaries under the GNU moniker (which recursively stands for "Gnu's Not Unix").
The original FSF / GNU initiatives fell short of their original goal of creating a completely OSS Unix. They did, however, contribute several famous and widely disseminated applications and programming tools used today including:
CopyLeft Licensing
FSF/GNU software introduced the "copyleft" licensing scheme that not only made it illegal to hide source code from GNU software but also made it illegal to hide the source from work derived from GNU software. The document that described this license is known as the General Public License (GPL).
Wired magazine has the following summary of this scheme & its intent (http://www.wired.com/wired/5.08/linux.html):
The general public license, or GPL, allows users to sell, copy, and change copylefted programs - which can also be copyrighted - but you must pass along the same freedom to sell or copy your modifications and change them further. You must also make the source code of your modifications freely available.
The second clause -- open source code of derivative works -- has been the most controversial (and, potentially the most successful) aspect of CopyLeft licensing.
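{ The copyleft mechanism just described can be reduced to a two-line rule. A toy model (mine, not the memo's) of how license terms propagate through derivation under copyleft versus BSD/Apache-style terms:

```python
# Toy model of license propagation: copyleft vs. permissive terms.
# Illustration only -- real license compatibility is far subtler.

def derive(parent_license, requested_license):
    """Return the license a derivative work may carry.

    Under the GPL every derivative must itself be GPL; under
    permissive (BSD/Apache-style) terms the deriver may choose any
    license, including a commercial one.
    """
    if parent_license == "GPL":
        return "GPL"          # copyleft overrides the deriver's choice
    return requested_license  # permissive: the fork may be relicensed

assert derive("GPL", "commercial") == "GPL"        # the clause at issue
assert derive("BSD", "commercial") == "commercial" # the fork the memo notes
```

}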
Open Source Process
Commercial software development processes are hallmarked by organization around economic goals. However, since money is often not the (primary) motivation behind Open Source Software, understanding the nature of the threat posed requires a deep understanding of the process and motivation of Open Source development teams.
{ This is a very important insight, one I wish Microsoft had missed. The real battle isn't NT vs. Linux, or Microsoft vs. Red Hat/Caldera/S.u.S.E. -- it's closed-source development versus open-source. The cathedral versus the bazaar.
This applies in reverse as well, which is why bashing Microsoft qua Microsoft misses the point -- they're a symptom, not the disease itself. I wish more Linux hackers understood this.
On a practical level, this insight means we can expect Microsoft's propaganda machine to be directed against the process and culture of open source, rather than specific competitors. Brace for it... }
Open Source Development Teams
Some of the key attributes of Internet-driven OSS teams:
Communication -- Internet Scale
Coordination of an OSS team is extremely dependent on Internet-native forms of collaboration. Typical methods employed run the full gamut of the Internet's collaborative technologies:
OSS projects the size of Linux and Apache are only viable if a large enough community of highly skilled developers can be amassed to attack a problem. Consequently, there is direct correlation between the size of the project that OSS can tackle and the growth of the Internet.
Common Direction
In addition to the communications medium, another set of factors implicitly coordinate the direction of the team.
Common Goals
Common goals are the equivalent of vision statements which permeate the distributed decision making for the entire development team. A single, clear directive (e.g. "recreate UNIX") is far more efficiently communicated and acted upon by a group than multiple, intangible ones (e.g. "make a good operating system").
Common Precedents
Precedence is potentially the most important factor in explaining the rapid and cohesive growth of massive OSS projects such as the Linux Operating System. Because the entire Linux community has years of shared experience dealing with many other forms of UNIX, they are easily able to discern -- in a non-confrontational manner -- what worked and what didn't.
There weren't arguments about the command syntax to use in the text editor -- everyone already used "vi" and the developers simply parcelled out chunks of the command namespace to develop.
Having historical, 20:20 hindsight provides a strong, implicit structure. In more forward looking organizations, this structure is provided by strong, visionary leadership.
{ At first glance, this just reads like a brown-nose-Bill comment by someone expecting that Gates will read the memo -- you can almost see the author genuflecting before an icon of the Fearless Leader.
More generally, it suggests a serious and potentially exploitable underestimation of the open-source community's ability to enable its own visionary leaders. We didn't get Emacs or Perl or the World Wide Web from ``20:20 hindsight'' -- nor is it correct to view even the relatively conservative Linux kernel design as a backward-looking recreation of past models.
Accordingly, it suggests that Microsoft's response to open source can be wrong-footed by emphasizing innovation in both our actions and the way we represent what we're doing to the rest of the world. }
Common Skillsets
NatBro points out the need for a commonly accepted skillset as a pre-requisite for OSS development. This point is closely related to the common-precedents phenomenon. From his email: A key attribute ... is the common UNIX/gnu/make skillset that OSS taps into and reinforces. I think the whole process wouldn't work if the barrier to entry were much higher than it is ... a modestly skilled UNIX programmer can grow into doing great things with Linux and many OSS products. Put another way -- it's not too hard for a developer in the OSS space to scratch their itch, because things build very similarly to one another, debug similarly, etc.
Whereas precedents identify the end goal, the common skillsets attribute describes the number of people who are versed in the process necessary to reach that end.
The Cathedral and the Bazaar
A very influential paper by an open source software advocate -- Eric Raymond -- was first published in May 1997 (http://www.redhat.com/redhat/cathedral-bazaar/). Raymond's paper was expressly cited by (then) Netscape CTO Eric Hahn as a motivation for their decision to release browser source code.
Raymond dissected his OSS project in order to derive rules-of-thumb which could be exploited by other OSS projects in the future. Some of Raymond's rules include:
Every good work of software starts by scratching a developer's personal itch
This summarizes one of the core motivations of developers in the OSS process -- solving an immediate problem at hand faced by an individual developer -- this has allowed OSS to evolve complex projects without constant feedback from a marketing / support organization.
Good programmers know what to write. Great ones know what to rewrite (and reuse).
Raymond posits that developers are more likely to reuse code in a rigorous open source process than in a more traditional development environment because they are always guaranteed access to the entire source all the time.
Widely available open source reduces search costs for finding a particular code snippet.
``Plan to throw one away; you will, anyhow.''
Quoting Fred Brooks, ``The Mythical Man-Month'', Chapter 11. Because development teams in OSS are often extremely far flung, many major subcomponents in Linux had several initial prototypes followed by the selection and refinement of a single design by Linus.
Treating your users as co-developers is your least-hassle route to rapid code improvement and effective debugging.
Raymond advocates strong documentation and significant developer support for OSS projects in order to maximize their benefits.
Code documentation is cited as an area which commercial developers typically neglect; such neglect would be a fatal mistake in OSS.
Release early. Release often. And listen to your customers.
This is a classic play out of the Microsoft handbook. OSS advocates will note, however, that their release-feedback cycle is potentially an order of magnitude faster than commercial software's.
{ This is an interestingly arrogant statement, as if they think I was somehow inspired by the Microsoft way of binary-only releases.
But it suggests something else -- that even though the author intellectually grasps the importance of source code releases, he doesn't truly grok how powerful a lever the early release specifically of source code truly is. Perhaps living within Microsoft's assumptions makes that impossible. TN comments:
The difference here is, in every release cycle Microsoft always listens to its most ignorant customers. This is the key to dumbing down each release cycle of software for further assaulting the non-PC population. Linux and OS/2 developers, OTOH, tend to listen to their customers. This necessarily limits the initial appeal of the operating system, while enhancing its long-term benefits. Perhaps only a monopolist like Microsoft could get away with selling worse products each generation -- products focused so narrowly on the least-technical member of the consumer base that they necessarily sacrifice technical excellence. Linux and OS/2 tend to appeal to the customer who knows greatness when he or she sees it.
The good that Microsoft does in bringing computers to the non-users is outdone by the curse they bring upon the experienced users, because their monopoly position tends to force everyone toward the lowest common denominator, not just the new users.
Note: This means that Microsoft does the ``heavy lifting'' of expanding the overall PC marketplace. The great fear at Microsoft is that somebody will come behind them and make products that not only are more reliable, faster, and more secure, but are also easy to use, fun, and make people more productive. That would mean that Microsoft had merely served as a pioneer and taken all the arrows in the back, while we who have better products become a second wave to homestead on Microsoft's tamed territory. Well, sounds like a good idea to me.
So, we ought to take a page from Microsoft's book and listen to the newbies once in a while. But not so often that we lose our technological superiority over Microsoft.
ESR again. I don't agree with TN's apparent assumption that ease-of-use and technical superiority are necessarily mutually exclusive; with good design it's possible to do both. But given limited resources and poor-to-mediocre design skills, they do tend to get set in opposition with each other. Thus there's enough point to TN's analysis to make it worth reproducing here. }
Given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix obvious to someone.
This is probably the heart of Raymond's insight into the OSS process. He paraphrased this rule as "debugging is parallelizable". More in-depth analysis follows.
{ Well, he got that right, anyway. }
Parallel Development
Once a component framework has been established (e.g. key API's & structures defined), OSS projects such as Linux utilize multiple small teams of individuals independently solving particular problems.
Because the developers are typically hobbyists, the ability to `fund' multiple, competing efforts is not an issue and the OSS process benefits from the ability to pick the best potential implementation out of the many produced.
Note that this is very dependent on:
Parallel Debugging
The core argument advanced by Eric Raymond is that unlike other aspects of software development, code debugging is an activity whose efficiency improves nearly linearly with the number of individuals tasked with the project. There are little or no management or coordination costs associated with debugging a piece of open source code -- this is the key `break' in Brooks' laws for OSS.
Raymond includes Linus Torvalds's description of the Linux debugging process:
My original formulation was that every problem ``will be transparent to somebody''. Linus demurred that the person who understands and fixes the problem is not necessarily or even usually the person who first characterizes it. ``Somebody finds the problem,'' he says, ``and somebody else understands it. And I'll go on record as saying that finding it is the bigger challenge.'' But the point is that both things tend to happen quickly.
Put alternately:
``Debugging is parallelizable''. Jeff [Dutky <dutky@wam.umd.edu>] observes that although debugging requires debuggers to communicate with some coordinating developer, it doesn't require significant coordination between debuggers. Thus it doesn't fall prey to the same quadratic complexity and management costs that make adding developers problematic.
One advantage of parallel debugging is that bugs and their fixes are found / propagated much faster than in traditional processes. For example, when the TearDrop IP attack was first posted to the web, less than 24 hours passed before the Linux community had a working fix available for download.
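{ Jeff Dutky's observation can be put in arithmetic terms. With n developers all coordinating with one another, the number of communication channels grows as n(n-1)/2 (Brooks's quadratic cost); with n debuggers each reporting only to a coordinating developer, it grows as n. A quick sketch (my illustration, not from the memo):

```python
# Communication channels: full-mesh development vs. hub-style debugging.

def channels_full_mesh(n):
    """Pairwise channels among n mutually coordinating developers."""
    return n * (n - 1) // 2  # n choose 2

def channels_hub(n):
    """Channels when n debuggers each talk only to one coordinator."""
    return n

for n in (10, 100, 1000):
    print(n, channels_full_mesh(n), channels_hub(n))
# At 1000 participants: 499,500 pairwise channels vs. 1,000 reports --
# which is why debugging parallelizes where development does not.
```

}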
"Impulse Debugging"
An extension to parallel debugging that I'll add to Raymond's hypothesis is "impulsive debugging". In the case of the Linux OS, implicit to the act of installing the OS is the act of installing the debugging/development environment. Consequently, it's highly likely that if a particular user/developer comes across a bug in another individual's component -- and especially if that bug is "shallow" -- that user can very quickly patch the code and, via internet collaboration technologies, propagate that patch very quickly back to the code maintainer.
Put another way, OSS processes have a very low entry barrier to the debugging process due to the common development/debugging methodology derived from the GNU tools.
Conflict resolution
Any large scale development process will encounter conflicts which must be resolved. Often resolution is an arbitrary decision in order to further the progress of the project. In commercial teams, the corporate hierarchy + performance review structure solves this problem -- how do OSS teams resolve them?
In the case of Linux, Linus Torvalds is the undisputed `leader' of the project. He's delegated large components (e.g. networking, device drivers, etc.) to several of his trusted "lieutenants" who further de-facto delegate to a handful of "area" owners (e.g. LAN drivers).
Other organizations are described by Eric Raymond: (http://earthspace.net/~esr/writings/homesteading/homesteading-15.html):
Some very large projects discard the `benevolent dictator' model entirely. One way to do this is turn the co-developers into a voting committee (as with Apache). Another is rotating dictatorship, in which control is occasionally passed from one member to another within a circle of senior co-developers (the Perl developers organize themselves this way).
Motivation
This section provides an overview of some of the key reasons OSS developers seek to contribute to OSS projects.
Solving the Problem at Hand
This is basically a rephrasing of Raymond's first rule of thumb -- "Every good work of software starts by scratching a developer's personal itch".
Many OSS projects -- such as Apache -- started as a small team of developers setting out to solve an immediate problem at hand. Subsequent improvements of the code often stem from individuals applying the code to their own scenarios (e.g. discovering that there is no device driver for a particular NIC, etc.).
Education
The Linux kernel grew out of an educational project at the University of Helsinki. Similarly, many of the components of Linux / GNU system (X windows GUI, shell utilities, clustering, networking, etc.) were extended by individuals at educational institutions.
Ego Gratification
The most ethereal, and perhaps most profound motivation presented by the OSS development community is pure ego gratification.
In "The Cathedral and the Bazaar", Eric S. Raymond cites:
The ``utility function'' Linux hackers are maximizing is not classically economic, but is the intangible of their own ego satisfaction and reputation among other hackers.
And, of course, "you aren't a hacker until someone else calls you a hacker".
Homesteading on the Noosphere
A second paper published by Raymond, "Homesteading on the Noosphere" (http://sagan.earthspace.net/~esr/writings/homesteading/), discusses the difference between economically motivated exchange (e.g. commercial software development for money) and "gift exchange" (e.g. OSS for glory).
"Homesteading" is acquiring property by being the first to `discover' it or by being the most recent to make a significant contribution to it. The "Noosphere" is loosely defined as the "space of all work". Therefore, Raymond posits, the OSS hacker motivation is to lay a claim to the largest area in the body of work. In other words, take credit for the biggest piece of the prize.
{ This is a subtle but significant misreading. It introduces a notion of territorial `size' which is nowhere in my theory. It may be a personal error of the author, but I suspect it reflects Microsoft's competition-obsessed culture. }
From "Homesteading on the Noosphere":
Abundance makes command relationships difficult to sustain and exchange relationships an almost pointless game. In gift cultures, social status is determined not by what you control but by
...
For, examined in this way, it is quite clear that the society of open-source hackers is in fact a gift culture. Within it, there is no serious shortage of the `survival necessities' -- disk space, network bandwidth, computing power. Software is freely shared. This abundance creates a situation in which the only available measure of competitive success is reputation among one's peers.
More succinctly (http://www.techweb.com/internet/profile/eraymond/interview):
SIMS: So the scarcity that you looked for was the scarcity of attention and reward?
RAYMOND: That's exactly correct.
Altruism
This is a controversial motivation and I'm inclined to believe that at some level, Altruism `degenerates' into a form of the Ego Gratification argument advanced by Raymond.
One smaller motivation which, in part, stems from altruism is Microsoft-bashing.
{ What a very fascinating admission, coming from a Microserf! Of course, he doesn't analyze why this connection exists; that might hit too close to home... }
Code Forking
A key threat in any large development team -- and one that is particularly exacerbated by the process chaos of an internet-scale development team -- is the risk of code-forking.
Code forking occurs when, over the normal push-and-pull of a development project, multiple inconsistent versions of the project's code base evolve.
In the commercial world, for example, the strong, singular management of the Windows NT codebase is considered to be one of its greatest advantages over the `forked' codebases found in commercial UNIX implementations (SCO, Solaris, IRIX, HP-UX, etc.).
Forking in OSS -- BSD Unix
Within OSS space, BSD Unix is the best example of forked code. The original BSD UNIX was an attempt by UC Berkeley to create a royalty-free version of the UNIX operating system for teaching purposes. However, Berkeley put severe restrictions on non-academic uses of the codebase.
{ The author's history of the BSD splits is all wrong. }
In order to create a fully free version of BSD UNIX, an ad hoc (but closed) team of developers created FreeBSD. Other developers at odds with the FreeBSD team for one reason or another splintered the OS to create other variations (OpenBSD, NetBSD, BSDI).
There are two dominant factors which led to the forking of the BSD tree:
{ OK, we've learned something now. This may in fact explain the counterintuitive fact that the projects which open up development the most actually have the least tendency to fork... }
Both of these motivations create a situation where developers may try to force a fork in the code and collect royalties (monetary, or ego) at the expense of the collective BSD society.
(Lack of) Forking in Linux
In contrast to the BSD example, the Linux kernel code base hasn't forked. Some of the reasons why the integrity of the Linux codebase has been maintained include:
Linus Torvalds is a celebrity in the Linux world and his decisions are considered final. By contrast, a similar celebrity leader did NOT exist for the BSD-derived efforts.
Linus is considered by the development team to be a fair, well-reasoned code manager, and his reputation within the Linux community is quite strong. However, Linus doesn't get involved in every decision. Often, subgroups resolve their -- often large -- differences amongst themselves and prevent code forking.
In contrast to BSD's closed membership, anyone can contribute to Linux and your "status" -- and therefore ability to `homestead' a bigger piece of Linux -- is based on the size of your previous contributions.
Indirectly this presents a further disincentive to code forking. There is almost no credible mechanism by which the forked, minority code base will be able to maintain the rate of innovation of the primary Linux codebase.
Because derivatives of Linux MUST be available through some free avenue, it lowers the long term economic gain for a minority party with a forked Linux tree.
Ego motivations push OSS developers to plant the biggest stake in the biggest Noosphere. Forking the code base inevitably shrinks the space of accomplishment for any subsequent developers to the new code tree.
What are the core strengths of OSS products that Microsoft needs to be concerned with?
OSS Exponential Attributes
Like our Operating System business, OSS ecosystems have several exponential attributes:
The single biggest constraint faced by any OSS project is finding enough developers interested in contributing their time towards the project. As an enabler, the Internet was absolutely necessary to bring together enough people for an Operating System scale project. More importantly, the growth engine for these projects is the growth in the Internet's reach. Improvements in collaboration technologies directly lubricate the OSS engine.
Put another way, the growth of the Internet will make existing OSS projects bigger and will make OSS projects in "smaller" software categories become viable.
Like commercial software, the most viable single OSS project in many categories will, in the long run, kill competitive OSS projects and `acquire' their IQ assets. For example, Linux is killing BSD Unix and has absorbed most of its core ideas (as well as ideas in the commercial UNIXes). This feature confers huge first-mover advantages on a particular project.
The larger the OSS project, the greater the prestige associated with contributing a large, high quality component to its Noosphere. This phenomenon contributes back to the "winner-take-all" nature of the OSS process in a given segment.
The larger the project, the more development/test/debugging the code receives. The more debugging, the more people who deploy it.
Binaries may die but source code lives forever
One of the most interesting implications of viable OSS ecosystems is long-term credibility.
Long-Term Credibility Defined
Long term credibility exists if there is no way you can be driven out of business in the near term. This forces change in how competitors deal with you.
{ TN comments: Note the terminology used here ``driven out of business''. MS believes that putting other companies out of business is not merely ``collateral damage'' -- a byproduct of selling better stuff -- but rather, a direct business goal. To put this in perspective, economic theory and the typical honest, customer-oriented businessperson will think of business as a stock-car race -- the fastest car with the most skillful driver wins. Microsoft views business as a demolition derby -- you knock out as many competitors as possible, and try to maneuver things so that your competitors wipe each other out and thereby eliminate themselves. In a stock car race there are many finishers and thus many drivers get a paycheck. In a demolition derby there is just one survivor. Can you see why ``Microsoft'' and ``freedom of choice'' are absolutely in two different universes? }
For example, Airbus Industries garnered initial long term credibility from explicit government support. Consequently, when bidding for an airline contract, Boeing would be more likely to accept short-term, non-economic returns when bidding against Lockheed than when bidding against Airbus.
Loosely applied to the vernacular of the software industry, a product/process is long-term credible if FUD tactics can not be used to combat it.
OSS is Long-Term Credible
OSS systems are considered credible because the source code is available from potentially millions of places and individuals.
{ We are deep inside the Microsoft world-view here. I realize that a typical hacker's reaction to this kind of thinking will be to find it nauseating, but it reflects a kind of instrumental ruthlessness about the uses of negative marketing that we need to learn to cope with.
The really interesting thing about these two statements is that they imply that Microsoft should give up on FUD as an effective tactic against us.
Most of us have been assuming that the DOJ antitrust suit is what's keeping Microsoft from hauling out the FUD guns. But if His Gatesness bought this part of the memo, Microsoft may believe that they need to develop a more substantive response because FUD won't work.
This could be both good and bad news. The good news is that Microsoft would give up attack marketing, a weapon which in the past has been much more powerful than its distinctly inferior technology. The bad news is that, against us, giving it up would actually be better strategy; they wouldn't be wasting energy any more and might actually evolve some effective response. }
The likelihood that Apache will cease to exist is orders of magnitude lower than the likelihood that WordPerfect, for example, will disappear. The disappearance of Apache is not tied to the disappearance of binaries (which are affected by purchasing shifts, etc.) but rather to the disappearance of source code and the knowledge base.
Inversely stated, customers know that Apache will be around 5 years from now -- provided there exists some minimal sustained interest from its user/development community.
One Apache customer, in discussing his rationale for running his e-commerce site on OSS, stated, "because it's open source, I can assign one or two developers to it and maintain it myself indefinitely."
Lack of Code-Forking Compounds Long-Term Credibility
The GPL and its aversion to code forking reassures customers that they aren't riding an evolutionary `dead-end' by subscribing to a particular commercial version of Linux.
The "evolutionary dead-end" is the core of the software FUD argument.
{ Very true -- and there's another glaring omission here. If the author had been really honest, he'd have noted that OSS advocates are well positioned to turn this argument around and beat Microsoft to death with it.
By the author's own admission, OSS is bulletproof on this score. On the other hand, the exploding complexity and schedule slippage of the just-renamed ``Windows 2000'' suggest that it is an evolutionary dead end.
The author didn't go on to point that out. But we should. }
Parallel Debugging
{ And the amateurs are ``making a progressively more credible argument''. By Microsoft's own admission, we're actually winning.
Maybe there's a message about the underlying products here? }
In particular, larger, more savvy organizations who rely on OSS for business operations (e.g. ISPs) are comforted by the fact that they can potentially fix a work-stopping bug independent of a commercial provider's schedule (for example, UUNET was able to obtain, compile, and apply the teardrop attack patch to their deployed Linux boxes within 24 hours of the first public attack).
Parallel Development
Alternatively stated, "developer resources are essentially free in OSS". Because the pool of potential developers is massive, it is economically viable to simultaneously investigate multiple solutions/versions of a problem and choose the best solution in the end.
For example, the Linux TCP/IP stack was probably rewritten 3 times. Assembly code components in particular have been continuously hand tuned and refined.
OSS = `perfect' API evangelization / documentation
OSS's API evangelization / developer education consists basically of providing the developer with the underlying code. Whereas evangelization of APIs in a closed-source model basically defaults to trust, OSS API evangelization lets the developer make up his own mind.
NatBro and Ckindel point out a split in developer capabilities here. Whereas the "enthusiast developer" is comforted by OSS evangelization, novice/intermediate developers -- the bulk of the development community -- prefer the trust model + organizational credibility (e.g. "Microsoft says API X looks this way").
{ Whether it's really true that most developers prefer the `trust' model or not is an extremely interesting question.
Twenty years of experience in the field tells me not; that, in general, developers prefer code even when their non-technical bosses are naive enough to prefer `trust'. Microsoft, obviously, wants to believe that its `organizational credibility' counts -- I detect some wishful thinking here.
On the other hand, they may be right. We in the open-source community can't afford to dismiss that possibility. I think we can meet it by developing high-quality documentation. In this way, `trust' in name authors (or in publishers of good repute such as O'Reilly or Addison-Wesley) can substitute for `trust' in an API-defining organization. }
Release rate
Strongly componentized OSS projects are able to release subcomponents as soon as the developer has finished his code. Consequently, OSS projects rev quickly & frequently.
Open Source Weaknesses
The weaknesses in OSS projects fall into 3 primary buckets:
The biggest roadblock for OSS projects is dealing with exponential growth of management costs as a project is scaled up in terms of rate of innovation and size. This implies a limit to the rate at which an OSS project can innovate.
Starting an OSS project is difficult
From Eric Raymond:
It's fairly clear that one cannot code from the ground up in bazaar style. One can test, debug and improve in bazaar style, but it would be very hard to originate a project in bazaar mode.
Raymond's argument can be extended to the difficulty of starting/sustaining a project if there is no clear precedent/goal (or there are too many goals) for the project.
Bazaar Credibility
Obviously, there are far more fragments of source code on the Internet than there are OSS communities. What separates "dead source code" from a thriving bazaar?
One article (http://www.mibsoftware.com/bazdev/0003.htm) provides the following credibility criteria:
"...thinking in terms of a hard minimum number of participants is misleading. Fetchmail and Linux have huge numbers of beta testers *now*, but they obviously both had very few at the beginning.
What both projects did have was a handful of enthusiasts and a plausible promise. The promise was partly technical (this code will be wonderful with a little effort) and sociological (if you join our gang, you'll have as much fun as we're having). So what's necessary for a bazaar to develop is that it be credible that the full-blown bazaar will exist!"
I'll posit that some of the key criteria that must exist for a bazaar to be credible include:
Post-Parity Development
When describing this problem to JimAll, he provided the perfect analogy of "chasing taillights". The easiest way to get coordinated behavior from a large, semi-organized mob is to point them at a known target. Having the taillights provides concreteness to a fuzzy vision. In such situations, having a taillight to follow is a proxy for having strong central leadership.
Of course, once this implicit organizing principle is no longer available (once a project has achieved "parity" with the state-of-the-art), the level of management necessary to push towards new frontiers becomes massive.
{ Nonsense. In the open-source world, all it takes is one person with a good idea.
Part of the point of open source is to lower the energy barriers that retard innovation. We've found by experience that the `massive management' the author extols is one of the worst of these barriers.
In the open-source world, innovators get to try anything, and the only test is whether users will volunteer to experiment with the innovation and like it once they have. The Internet facilitates this process, and the cooperative conventions of the open-source community are specifically designed to promote it.
The third alternative to ``chasing taillights'' or ``strong central leadership'' (and more effective than either) is an evolving creative anarchy, in which there are a thousand leaders and ten thousand followers linked by a web of peer review and subject to rapid-fire reality checks.
Microsoft cannot beat this. I don't think they can even really understand it, not on a gut level. }
This is possibly the single most interesting hurdle to face the Linux community now that they've achieved parity with the state of the art in UNIX in many respects.
{ The Linux community has not merely leapt this hurdle, but utterly demolished it. This fact is at the core of open-source's long-term advantage over closed-source development. }
Un-sexy work
Another interesting thing to observe in the near future of OSS is how well the team is able to tackle the "unsexy" work necessary to bring a commercial grade product to life.
In the operating systems space, this includes small, essential functions such as power management, suspend/resume, management infrastructure, UI niceties, deep Unicode support, etc.
For Apache, this may mean novice-administrator functionality such as wizards.
Integrative/Architectural work
Integrative work across modules is the biggest cost encountered by OSS teams. An email memo from Nathan Myhrvold in 5/98 points out that, of all the aspects of software development, integration work is the most subject to Brooks' Law.
Up till now, Linux has greatly benefited from the integration / componentization model pushed by previous UNIXes. Additionally, the organization of Apache was simplified by the relatively simple, fault tolerant specifications of the HTTP protocol and UNIX server application design.
Future innovations which require changes to the core architecture / integration model are going to be incredibly hard for the OSS team to absorb because it simultaneously devalues their precedents and skillsets.
{ This prediction is of a piece with the author's earlier assertion that open-source development relies critically on design precedents and is unavoidably backward-looking. It's myopic -- apparently things like Python, Beowulf, and Squeak (to name just three of hundreds of innovative projects) don't show on his radar.
We can only hope Microsoft continues to believe this, because it would hinder their response. Much will depend on how they interpret innovations such as (for example) the SMPization of the Linux kernel.
Interestingly, the author contradicts himself on this point.
A former Microserf tells me that `throw one away' is actually pretty close to a defined Microsoft policy, but one designed to leverage marketing rather than fix problems. The project he was involved with was a web-based front-end to Exchange. The resulting first draft (after 14 months of effort) was completely inferior to already-existing free web e-mail (Yahoo, Hotmail, etc.). The official response to that was ``
He adds: Internet Explorer 5, just before one of its beta releases, had about 300K (yes, 300K) outstanding bugs targeted to be fixed before the beta release. Much of this was accomplished by simply removing large chunks of planned (new) functionality and pushing them to a later (+1-2 years later) release.
}
These are weaknesses intrinsic to OSS's design/feedback methodology.
Iterative Cost
One of the keys to the OSS process is having many more iterations than commercial software (Linux was known to rev its kernel more than once a day!). However, commercial customers tell us they want fewer revs, not more.
{ I wonder how this answer would change if Microsoft revs weren't so expensive?
This is why commercial Linux distributors exist -- to mediate between the rapid-development process and customers who don't want to follow every twist of it. The kernel may rev once a day, but Red Hat only revs once in six months. }
"Non-expert" Feedback
The Linux OS is not developed for end users but rather for other hackers. Similarly, the Apache web server is implicitly targeted at the largest, most savvy site operators, not the departmental intranet server.
The key thread here is that because OSS doesn't have an explicit marketing / customer feedback component, wishlists -- and consequently feature development -- are dominated by the most technically savvy users.
One thing that development groups at MSFT have learned time and time again is that ease of use, UI intuitiveness, etc. must be built from the ground up into a product and can not be pasted on at a later time.
{ This demands comment -- because it's so right in theory, but so hideously wrong in Microsoft practice. The wrongness implies an exploitable weakness in the implied strategy (for Microsoft) of emphasizing UI.
There are two ways to build in ease of use "from the ground up". One (the Microsoft way) is to design monolithic applications that are defined and dominated by their UIs. This tends to produce ``Windowsitis'' -- rigid, clunky, bug-prone monstrosities that are all glossy surface with a hollow interior.
Programs built this way look user-friendly at first sight, but turn out to be huge time and energy sinks in the longer term. They can only be sustained by carpet-bomb marketing, the main purpose of which is to delude users into believing that (a) bugs are features, or that (b) all bugs are really the stupid user's fault, or that (c) all bugs will be abolished if the user bends over for the next upgrade. This approach is fundamentally broken.
The other way is the Unix/Internet/Web way, which is to separate the engine (which does the work) from the UI (which does the viewing and control). This approach requires that the engine and UI communicate using a well-defined protocol. It's exemplified by browser/server pairs -- the engine specializes in being an engine, and the UI specializes in being a UI.
With this second approach, overall complexity goes down and reliability goes up. Further, the interface is easier to evolve/improve/customize, precisely because it's not tightly coupled to the engine. It's even possible to have multiple interfaces tuned to different audiences.
Finally, this architecture leads naturally to applications that are enterprise-ready -- that can be used or administered remotely from the server. This approach works -- and it's the open-source community's natural way to counter Microsoft.
The key point here is that if Microsoft wants to fight the open-source community on UI, let them -- because we can win that battle, too, fighting it our way. They can write ever-more-elaborate Windows monoliths that spot-weld you to your application-server console. We'll win if we write clean distributed applications that leverage the Internet and the Web and make the UI a pluggable/unpluggable user choice that can evolve.
Note, however, that our win depends on the existence of well-defined protocols (such as HTTP) to communicate between UIs and engines. That's why the stuff later in this memo about ``de-commoditizing protocols'' is so sinister. We need to guard against that. }
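The engine/UI split argued for above can be sketched in a few lines of shell. The function names and the key=value protocol are invented for illustration; the point is only that the two sides communicate through a well-defined text protocol, so either can be replaced without touching the other:

```shell
# "Engine" and "UI" talk only through a trivial key=value line protocol,
# so the front end is a pluggable user choice.
engine() {                      # does the work
  printf 'status=ok\nload=0.42\n'
}
ui_text() {                     # one possible front end among many
  while IFS='=' read -r key val; do
    printf '%s: %s\n' "$key" "$val"
  done
}
engine | ui_text                # prints "status: ok" then "load: 0.42"
```

A graphical or remote front end could replace ui_text unchanged, which is the "multiple interfaces tuned to different audiences" property the annotation describes.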
The interesting trend to observe here will be the effect that commercial OSS providers (such as RedHat in Linux space, C2Net in Apache space) will have on the feedback cycle.
How can OSS provide the service that consumers expect from software providers?
Support Model
Product support is typically the first issue prospective consumers of OSS packages worry about and is the primary feature that commercial redistributors tout.
However, the vast majority of OSS projects are supported by the developers of the respective components. Scaling this support infrastructure to the level expected in commercial products will be a significant challenge. There are many orders of magnitude difference between users and developers in IIS vs. Apache.
{ The vagueness of this last sentence is telling. Had the author continued, he would have had to acknowledge that Apache is clobbering the crap out of IIS in the marketplace (Apache's share 54% and climbing; IIS's somewhere around 14% and dropping).
This would have led to a choice of unpalatable (for Microsoft) alternatives. It may be that Apache's informal user-support channels and `organizational credibility' actually produce better results than Microsoft's IIS organization can offer. If that's true, then it's hard to see in principle why the same shouldn't be true of other open-source projects.
The alternative -- that Apache is so good that it doesn't need much support or `organizational credibility' -- is even worse. That would mean that all of Microsoft's heavy-duty support and marketing battalions were just a huge malinvestment, like crumbling Stalinist apartment blocks forty years later.
These two possible explanations imply distinct but parallel strategies for open-source advocates. One is to build software that's so good it just doesn't need much support (but we'd do this anyway, and generally have). The other is to do more intensely what we're already doing along the lines of support mailing lists, newsgroups, FAQs, and other informal but extremely effective channels.
A former Microserf adds: As of NT5 (sorry, Win2K :-) MS is going to claim a huge increase in IIS market share. This is because IIS5 is built directly linked with the NT kernel and handles all external TCP traffic (mail, http, etc). MSOffice is also going to communicate through IIS when talking with NT or Exchange, thus allowing them to add all internal LAN traffic to their usage reports. Let's see if we can pop their balloon before they raise it. }
For the short-medium run, this factor alone will relegate OSS products to the top tiers of the user community.
Strategic Futures
A very subtle problem which will affect full-scale consumer adoption of OSS projects is the lack of strategic direction in the OSS development cycle. While incremental improvement of the current bag of features in an OSS product is very credible, future features have no organizational commitment to guarantee their development.
{ No. In the open-source community, new features are driven by the novelty- and territory-seeking behavior of individual hackers. This certainly is not a force to be despised. The Internet and the Web were built this way -- not because of `organizational commitment', but because somebody, somewhere, thought ``Hey -- this would be neat...''.
Perhaps we're fortunate that `organizational credibility' looms so large in the Microsoft world-view. The time and energy they spend worrying about that and believing it's a prerequisite is resources they won't spend doing anything that might be effective against us. }
Open Source Business Models
In the last 2 years, OSS has taken another twist with the emergence of companies that sell OSS software and, more importantly, hire full-time developers to improve the code base. What's the business model that justifies these salaries?
In many cases, the answers to these questions are similar to "why should I submit my protocol/app/API to a standards body?"
Secondary Services
The vendor of OSS-ware provides sales, support, and integration to the customer. Effectively, this transforms the OSS-ware vendor from a packaged-goods manufacturer into a services provider.
Loss Leader -- Market Entry
The Loss Leader OSS business model can be used for two purposes:
Many OSS startups -- particularly those in Operating Systems space -- view funding the development of OSS products as a strategic loss leader against Microsoft.
Linux distributors, such as RedHat, Caldera, and others, are expressly willing to fund full time developers who release all their work to the OSS community. By simultaneously funding these efforts, Red Hat and Caldera are implicitly colluding and believe they'll make more short term revenue by growing the Linux market rather than directly competing with each other.
An indirect example is O'Reilly & Associates' employment of Larry Wall -- "leader" and full-time developer of PERL. The #1 publisher of PERL reference books, of course, is O'Reilly & Associates.
For the short run, especially as the OSS project is at the steepest part of its growth curve, such investments generate positive ROI. Longer term, ROI motivations may steer these developers towards making proprietary extensions rather than releasing OSS.
This is very closely related to the loss leader business model. However, instead of trying to get marginal service returns by massively growing the market, these businesses increase returns in their part of the value chain by commoditizing downstream suppliers.
The best examples of this currently are the thin-server vendors, such as Whistle Communications and Cobalt Micro, who are actively funding developers in SAMBA and Linux respectively.
Both Whistle and Cobalt generate their revenue on hardware volume. Consequently, funding OSS enables them to avoid today's PC market where a "tax" must be paid to the OS vendor (NT Server retail price is $800 whereas Cobalt's target MSRP is around $1000).
The earliest Apache developers were employed by cash-strapped ISPs and ICPs.
Another, more recent example is IBM's deal with Apache. By declaring the HTTP server a commodity, IBM hopes to concentrate returns in the more technically arcane application services it bundles with its Apache distribution (as well as hoping to reach Apache's tremendous market share).
First Mover -- Build Now, $$ Later
One of the exponential qualities of OSS -- successful OSS projects swallow less successful ones in their space -- implies a pre-emption business model where, by investing directly in OSS today, a company can pre-empt / eliminate competitive projects later -- especially if the project requires API evangelization. This is tantamount to seizing a first-mover advantage in OSS.
In addition, the developer scale, iteration rate, and reliability advantages of the OSS process are a blessing to small startups who typically can't afford a large in-house development staff.
Examples of startups in this space include SendMail.com (making a commercially supported version of the sendmail mail transfer agent) and C2Net (which makes commercial and encrypted versions of Apache).
Notice that no case of a successful startup originating an OSS project has been observed. In both of these cases, the OSS project existed before the startup was formed.
Sun Microsystems has recently announced that its "JINI" project will be provided via a form of OSS and may represent an application of the pre-emption doctrine.
Linux
The next several sections analyze the most prominent OSS projects including Linux, Apache, and now, Netscape's OSS browser.
A second memo titled "Linux OS Competitive Analysis" provides an in-depth review of the Linux OS. Here, I provide a top-level summary of my findings in Linux.
What is it?
Linux (pronounced "LYNN-ucks") is the #1 market share Open Source OS on the Internet. Linux derives strongly from the 25+ years of lessons learned on the UNIX operating system.
Top-Level Features:
Like other Open Source Software (OSS) products, the real key to Linux isn't the static version of the product but rather the process around it. This process lends credibility and an air of future-safeness to customer Linux investments.
Linux is a short/medium-term threat in servers
The primary threat Microsoft faces from Linux is against NT Server.
Linux's future strength against NT server (and other UNIXes) is fed by several key factors:
{ To put it slightly differently: Linux can win if services are open and protocols are simple and transparent. Microsoft can only win if services are closed and protocols are complex and opaque.
To put it even more bluntly: "commodity" services and protocols are good things for customers; they promote competition and choice. Therefore, for Microsoft to win, the customer must lose.
The most interesting revelation in this memo is how close to explicitly stating this logic Microsoft is willing to come. }
Linux is unlikely to be a threat in the medium-long term on the desktop for several reasons:
{ Though this is true, it evades an important issue -- which is that Microsoft's own meretriciousness on this score doesn't make its criticism any less valid. Open-source development really is poor at addressing this class of issues, because it doesn't involve systematic ease-of-use testing with non-hackers.
This genuinely will slow down Linux's advance on the desktop. It is not likely to stall it forever, however -- not if efforts like GNOME and KDE get time to mature. }
{ Even granting the author's presumption, the possibility that Linux can grab a sufficient `first-mover' advantage is not safely foreclosed unless the open-source mode really is incapable of generating innovation -- and we already know that's not true. }
In addition to attacking the general weaknesses of OSS projects (e.g. Integrative / Architectural costs), some specific attacks on Linux are:
All the standard product issues for NT vs. Sun apply to Linux.
Linux's homebase is currently commodity network and server infrastructure. By folding extended functionality (e.g. Storage+ in file systems, DAV/POD for networking) into today's commodity services, we raise the bar & change the rules of the game.
{ What the author is driving at is nothing less than trying to subvert the entire "commodity network and server" infrastructure (featuring TCP/IP, SMTP, HTTP, POP3, IMAP, NFS, and other open standards) into using protocols which, though they might have the same names, have actually been converted into customer- and market-control devices for Microsoft (this is what the author really means when he exhorts Microserfs to ``raise the bar & change the rules of the game'').
The `folding extended functionality' here is a euphemism for introducing nonstandard extensions (or entire alternative protocols) which are then saturation-marketed as standards, even though they're closed, undocumented, or specified just enough to create an illusion of openness. The objective is to make the new protocols a checklist item for gullible corporate buyers, while simultaneously making the writing of third-party symbiotes for Microsoft programs next to impossible. (And anyone who succeeds gets bought out.)
This game is called ``embrace and extend''. We've seen Microsoft play this game before, and they're very good at it. When it works, Microsoft wins a monopoly lock. Customers lose.
(This standards-pollution strategy is perfectly in line with Microsoft's efforts to corrupt Java and break the Java brand.)
Open-source advocates can counter by pointing out exactly how and why customers lose (reduced competition, higher costs, lower reliability, lost opportunities). Open-source advocates can also make this case by showing the contrapositive -- that is, how open source and open standards increase vendor competition, decrease costs, improve reliability, and create opportunities.
Once again, as Microsoft conceded earlier in the memo, the Internet is our poster child. Our best stop-thrust against embrace-and-extend is to point out that Microsoft is trying to close up the Internet. }
In an attempt to renew its credibility in the browser space, Netscape has recently released its Mozilla source code and is attempting to create an OSS community around it.
Organization & Licensing
Netscape's organization and licensing model is loosely based on the Linux community & GPL, with a few differences. First, Mozilla and Netscape Communicator are two codebases, with Netscape's engineers providing synchronization.
Unlike the full GPL, Netscape reserves the final right to reject / force modifications into the Mozilla codebase and Netscape's engineers are the appointed "Area Directors" of large components (for now).
Capitalize on Anti-MSFT Sentiment in the OSS Community
Relative to other OSS projects, Mozilla is considered to be one of the most direct, near-term attacks on the Microsoft establishment. This alone is probably a key galvanizing factor in motivating developers towards the Mozilla codebase.
New credibility
The availability of Mozilla source code has renewed Netscape's credibility in the browser space to a small degree. As BharatS points out in
http://ie/specs/Mozilla/default.htm:
"They have guaranteed by releasing their code that they will never disappear from the horizon entirely in the manner that Wordstar has disappeared. Mozilla browsers will survive well into the next 10 years even if the user base does shrink."
Scratch a big itch
The browser is widely used / disseminated. Consequently, the pool of people who may be willing to solve "an immediate problem at hand" and/or fix a bug may be quite high.
Weaknesses
Post-parity development
Mozilla is already at close to parity with IE4/5. Consequently, there is no strong example to chase to help implicitly coordinate the development team.
Netscape has assigned some of their top developers towards the full time task of managing the Mozilla codebase and it will be interesting to see how this helps (if at all) the ability of Mozilla to push on new ground.
Small Noosphere
An interesting weakness is the size of the remaining "Noosphere" for the OSS browser.
There are no longer any large, high-profile segments of the stand-alone browser which must be developed. In other words, Netscape has already solved the interesting 80% of the problem. There is little / no ego gratification in debugging / fixing the remaining 20% of Netscape's code.
Linus Torvalds' management of the Linux codebase is arguably directed towards the goal of creating the best Linux. Netscape, by contrast, expressly reserves the right to make code management decisions on the basis of Netscape's commercial / business interests. Instead of creating an important product, the developers' code is being subjugated to Netscape's stock price.
Integration Cost
Potentially the single biggest detriment to the Mozilla effort is the level of integration that customers expect from features in a browser. As stated earlier, integration development / testing is NOT a parallelizable activity and therefore is hurt by the OSS process.
In particular, much of the new work for IE5+ is not just integrating components within the browser but continuing integration within the OS. This will be exceptionally painful to compete against.
Predictions
The contention, therefore, is that unlike the Apache and Linux projects, which, for now, are quite successful, Netscape's Mozilla effort will:
Keeping in mind that the source code was only released a short time ago (April '98), there is already evidence of waning interest in Mozilla. EXTREMELY unscientific evidence is found in the decline in mailing list volume on Mozilla mailing lists from April to June.
Mozilla Mailing List | April 1998 | June 1998 | % decline
Feature Wishlist | 1073 | 450 | 58%
UI Development | 285 | 76 | 73%
General Discussion | 1862 | 687 | 63%
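The "% decline" column is ordinary arithmetic, (April - June) / April; a quick sanity check of the quoted figures (the helper name is ours, for illustration only):

```python
# Verify the "% decline" figures in the table above:
# decline = (before - after) / before, rounded to the nearest percent.

def pct_decline(before, after):
    return round(100 * (before - after) / before)

assert pct_decline(1073, 450) == 58   # Feature Wishlist
assert pct_decline(285, 76) == 73     # UI Development
assert pct_decline(1862, 687) == 63   # General Discussion
```

All three rows check out against the quoted counts.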
Internal mirrors of the Mozilla mailing lists can be found on http://egg.Microsoft.com/wilma/lists
{ Heh. The `egg' machine, it turns out, is a Linux box. }
Apache
History
Paraphrased from
http://www.apache.org/ABOUT_APACHE.html:
In February of 1995, the most popular server software on the Web was the public domain HTTP daemon developed by NCSA, University of Illinois, Urbana-Champaign. However, development of that httpd had stalled after mid-1994, and many webmasters had developed their own extensions and bug fixes that were in need of a common distribution. A small group of these webmasters, contacted via private e-mail, gathered together for the purpose of coordinating their changes (in the form of "patches"). By the end of February '95, eight core contributors formed the foundation of the original Apache Group. In April 1995, Apache 0.6.2 was released.
During May-June 1995, a new server architecture (code-named Shambhala) was developed which included a modular structure and API for better extensibility, pool-based memory allocation, and an adaptive pre-forking process model. The group switched to this new server base in July and added the features from 0.7.x, resulting in Apache 0.8.8 (and its brethren) in August.
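The "adaptive pre-forking process model" mentioned above can be illustrated with a small sketch. This is not Apache's actual code; the function name and the MinSpareServers/MaxSpareServers-style parameters are illustrative assumptions modeled on the idea that a parent process keeps a pool of idle workers within a configured range:

```python
# Illustrative sketch (not Apache source) of the adaptive pre-forking idea:
# the parent process keeps the count of idle worker processes between a
# configured minimum and maximum, forking new workers or reaping surplus
# ones as load changes.

def adjust_pool(idle_workers, min_spare=5, max_spare=10):
    """Return how many workers to fork (positive) or reap (negative)."""
    if idle_workers < min_spare:
        return min_spare - idle_workers   # fork enough to restore the floor
    if idle_workers > max_spare:
        return max_spare - idle_workers   # negative: reap the surplus
    return 0                              # pool is in range; do nothing
```

Under these assumed defaults, `adjust_pool(2)` asks for 3 forks and `adjust_pool(14)` asks for 4 reaps; a real server would perform the corresponding fork()/exit() calls in its maintenance loop, so that a request never waits on a fresh fork.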
Less than a year after the group was formed, the Apache server passed NCSA's httpd as the #1 server on the Internet.
The Apache development team consists of about 19 core members plus hundreds of web site administrators around the world who've submitted a bug report / patch of one form or another. Apache's bug data can be found at:
http://bugs.apache.org/index.
A description of the code management and dispute resolution procedures followed by the Apache team is found on
http://www.apache.org:
Leadership:
There is a core group of contributors (informally called the "core") which was formed from the project founders and is augmented from time to time when core members nominate outstanding contributors and the rest of the core members agree.
Dispute resolution:
Changes to the code are proposed on the mailing list and usually voted on by active members -- three +1 (yes) votes and no -1 (no votes, or vetoes) are needed to commit a code change during a release cycle.
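The voting rule above is simple enough to state as code. A minimal sketch, assuming ballots are collected as a list of +1 / 0 / -1 integers (the function name is illustrative, not part of any Apache tooling):

```python
# Sketch of the Apache commit rule described above: a change needs at
# least three +1 votes and zero -1 (veto) votes to be committed during
# a release cycle. A 0 ballot (abstain) neither helps nor blocks.

def change_accepted(ballots):
    """ballots: list of +1, 0, or -1 votes from active members."""
    return ballots.count(+1) >= 3 and ballots.count(-1) == 0
```

Note the veto asymmetry: a single -1 blocks a change no matter how many +1 votes it gathers, which is what keeps a small core group in effective control of quality.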
Market Share!
Apache has far and away the #1 web site share on the Internet today. Possession of the lion's share of the market provides extremely powerful control over the market's evolution.
In particular, Apache's market share in web server space presents the following competitive hurdles:
3rd Party Support
The number of tools / modules / plug-ins available for Apache has been growing at an increasing rate.
Weaknesses
Performance
In the short run, IIS soundly beats Apache on SPECweb. Moving forward, as IIS moves into the kernel and takes advantage of deeper integration with NT, this lead is expected to increase further.
Apache, by contrast, is saddled with the requirement to create portable code for all of its OS environments.
HTTP Protocol Complexity & Application services
Part of the reason that Apache was able to get a foothold and take off is that the HTTP protocol is so simple. As more and more features become layered on top of the humble web server (e.g. multi-server transaction support, POD, etc.) it will be interesting to see how the Apache team will be able to keep up.
ASP support, for example, is a key driver for IIS in corporate intranets.
IBM & Apache
Recently, IBM announced its support for the Apache codebase in its WebSphere application server. The actual result of the press furor is still unclear, however:
Some other OSS projects:
In general, a lot more thought/discussion needs to be put into Microsoft's response to the OSS phenomenon. The goal of this document is education and analysis of the OSS process; consequently, in this section I present only a very superficial list of options and concerns.
Product Vulnerabilities
Where is Microsoft most likely to feel the "pinch" of OSS projects in the near future?
Server vs. Client
The server is more vulnerable to OSS products than the client. Reasons for this include:
How can Microsoft capture some of the rabid developer mindshare being focused on OSS products?
Some initial ideas include:
What can Microsoft learn from the OSS example? More specifically: how can we recreate the OSS development environment internally? Different reviewers of this paper have consistently pointed out that internally, we should view Microsoft as an idealized OSS community but, for various reasons, do not:
"a developer at Microsoft working on the OS can't scratch an itch they've got with Excel, neither can the Excel developer scratch their itch with the OS -- it would take them months to figure out how to build & debug & install, and they probably couldn't get proper source access anyway"
"People have to work on their parts independent of the rest so internal abstractions between components are well documented and well exposed/exported as well as being more robust because they have no idea how they are going to be called. The linux development system has evolved into allowing more devs to party on it without causing huge numbers of integration issues because robustness is present at every level. This is great, long term, for overall stability and it shows."
The trick of course, is to capture these benefits without incurring the costs of the OSS process. These costs are typically the reasons such barriers were erected in the first place:
Supporting a platform & development community requires a lot of service infrastructure which OSS can't provide. This includes PDCs, MSDN, ADCU, ISVs, IHVs, etc.
The OSS community's "MSDN" equivalent, of course, is a loose confederation of web sites with API docs of varying quality. MS has an opportunity to really exploit the web for developer evangelization.
Blunting OSS attacks
Generally, Microsoft wins by attacking the core weaknesses of OSS projects.
De-commoditize protocols & applications
David Stutz makes a very good point: in competing with Microsoft's level of desktop integration, "commodity protocols actually become the means of integration" for OSS projects. There is a large amount of IQ being expended in various IETF working groups which are quickly creating the architectural model for integration for these OSS projects.
{ In other words, open protocols must be locked up and the IETF crushed in order to ``de-commoditize protocols & applications'' and stop open-source software.
A former Microserf adds: only half of the reason MS sends people to the W3C working groups relates to a desire to improve RFC standards. The other half is to give MS a sneak peek at upcoming standards so they can "extend" them in advance and claim that the `official' standard is `obsolete' when it emerges around the same time as their `extension'.
Once again, open-source advocates' best response is to point out to customers that when things are ``de-commoditized'', vendors gain and customers lose. }
Some examples of Microsoft initiatives which are extending commodity protocols include:
Make Integration Compelling -- Especially on the server
The rise of specialty servers is a particularly potent and dire long term threat that directly affects our revenue streams. One of the keys to combating this threat is to create integrative scenarios that are valuable on the server platform. David Stutz points out:
The bottom line here is whoever has the best network-oriented integration technologies and processes will win the commodity server business. There is a convergence of embedded systems, mobile connectivity, and pervasive networking protocols that will make the number of servers (especially "specialist servers"??) explode. The general-purpose commodity client is a good business to be in - will it be dwarfed by the special-purpose commodity server business?
Organizational Credibility
Many people provided datapoints, proofreading, thoughtful email, and analysis on both this paper and the Linux analysis:
Nat Brown
Jim Allchin
Charlie Kindel
Ben Slivka
Josh Cohen
George Spix
David Stutz
Stephanie Ferguson
Jackie Erickson
Michael Nelson
Dwight Krossa
David D'Souza
David Treadwell
David Gunter
Oshoma Momoh
Alex Hopman
Jeffrey Robertson
Sankar Koundinya
Alex Sutton
Bernard Aboba
Revision History
Date | Revision | Comments
8/03/98 | 0.95 |
8/10/98 | 0.97 | Started revision table. Folded in comments from JoshCo
8/11/98 | 1.00 | More fixes, printed copies for PaulMa review