Where the Web Leads Us -- Responses

October 20, 1999

Dale Dougherty


We've collected a number of responses to Tim O'Reilly's article, "Where the Web Leads Us." In addition, the article was widely discussed on Slashdot.

Morten Christensen

Morten is a software developer (www.mortench.net).

I just want to thank and congratulate you for the very interesting insights you shared in the article "Where the Web Leads Us" on XML.com. In particular, I found your comments about new paradigms and infoware as the new killer apps interesting. As a software person with a traditional focus on developing applications, I can say your article certainly has me thinking!

Matt Welsh

Matt is one of the organizers and writers behind the Linux Documentation Project as well as an O'Reilly author. 

I saw your article on XML.com about Linux/Open Source and the development of the Web—you're right on target. The theme here resonates very much with the research I do at Berkeley to build an "open infrastructure" for scalable Internet services. We're really going beyond the Web to encompass universal information access to wireless handheld devices and the like, and everything you said makes a lot of sense even in this context. (In case you're curious about what we're up to, the project web page is http://ninja.cs.berkeley.edu).

I think it's very important to remind people that Open Source isn't just a Microsoft-killer; it's a way to develop the infrastructure for the next generation of computing. Great job!

Steve Champeon

Steve runs the Webdesign-L mailing list, where he posted this response.

If you read nothing else in the next few weeks, read this article. It's Tim O'Reilly (yes, that O'Reilly) talking about the impact of open source (e.g., BIND, Apache, Linux, Perl, PHP, Mozilla) and open standards (e.g., SGML->HTML/XML, PNG, Unicode, ECMAScript, etc.) on the future of the Internet and the Web.

He makes the point that the next "killer app" (the reason people go out and buy a computer, as VisiCalc/Lotus did for small computers in big business, and Mosaic/Netscape did for home computers a few years ago) is not a traditional software application at all, but individual Web sites, like Amazon.com or cdnow.com. I particularly like his characterization (quoted below) of "infoware":

"Traditional software embeds small amounts of information in a lot of software; infoware embeds small amounts of software in a lot of information."

He's talking about transactions—not just money, but the sorts of transactions we see every day on this list: people using a tool to create a small, one-time exchange of information that can potentially lead to a series of such exchanges. Community, in other words, but also commerce, constancy, and familiarity. Think of a single request for information about a colormap issue that not only results in an answer to the immediate question at hand, but also generates a good deal of subsequent follow-up. Sometimes such exchanges inspire someone to invent a small application that leverages that information, like Bob Stein's Colorlab, or the various Photoshop plugins, such as Todd Fahrner's WebScrub, which became the core of the Furbo Filters package. Information and software have always gone hand in hand, but now information, software, *communication*, and the exponential power of people interacting as communities (or communities interacting with each other) can make such exchanges and transactions more powerful and extend their lifetime.

It's not really mentioned in the article, but one of the open standards technologies that I consider to be "the" killer app is email. MIME, RFC822, and similar, perhaps not so much standards as "conventions", drive the Net and make it possible for places like webdesign-L to thrive. We've had some conversations recently on the list about whether the "elitism" encountered here is a matter of "content", such as the never ending question of marketing/branding vs. functionality and UI, or a matter of form, in the sense of how the content is delivered.

I am very protective of this place as a lowest-common-denominator-friendly list. The measures we take to prevent spammers from invading, to prevent people from posting HTML email, attachments, etc. are intended to ensure that, regardless of the "elitist" nature of some of the discussions, the discussions themselves are available to everyone to read, join in on, and contribute to. And, of course, there are economic considerations, such as bandwidth, given the worldwide membership, some of whom pay by the byte for their email. And I still use pine.

This could turn into a lengthy and heated discussion about whether such practices are denying the "ignorant" or "uninformed" access to such information as is purveyed here. I don't want the thread to degenerate into questions of meritocratic vs. socialist ideals, or anything of that ilk. But I do think it important that we consider the reasoning behind any act of exclusion and the efforts that are made to give people a way past those already low barriers to entry. I believe there is a line that must be drawn between making a resource "available for everyone" and making it impossible for the group to thrive. There's a great essay by Garrett Hardin, called "The Tragedy of the Commons" that I read a few years ago, in which he compares the English town commons (shared by all the shepherds and farmers, in a sort of Prisoner's Dilemma of waste and desolation driven by individual self-interest) with the problem of overpopulation.

I believe the issues raised in this essay have severe implications for artificial, online communities such as webdesign-L. I've deliberately set a few barriers to entry into this list—you have to be able to master a simple, if arcane, syntax for subscribing, you have to be able to figure out how (and why) you should configure your mail client so that it doesn't abuse the network by sending HTML mail or V-cards, etc. In short, we demand a certain demonstration of competence and familiarity with the environment of which you will be a member, before you are allowed to subscribe. Still, it's a pretty low barrier to pass, if you care enough to satisfy the basic requirements. We don't punish people for lurking, or for being on the digest (which tends to discourage posting and interactive threads, because of the time lag). Once you're in, you're in and welcome. Even if you're only here to learn and consume the experience others share openly.

Barriers to entry are one of the key topics in Tim's article. He's talking about the change that the PC ushered into a world where hardware was king, and how the result was that Microsoft—a software company, just as IBM was (and still is) a hardware-and-services company—assumed the mantle of leader of a new industry. To understand the impact of open source and open standards on the next generation of barrier-crushers, you must first understand how hardware (closed, proprietary) gave way to software (closed, proprietary) when the hardware suddenly became a commodity. The same thing is happening with the Web, where software is becoming the commodity and the applications written on top of it are becoming the new kings: Excite runs on Perl, AtomZ runs on FreeBSD, Slashdot runs on Perl, and this list runs on Majordomo (itself a Perl application) and sendmail, free software on a free OS, Linux.

The difference here is that the basic software is already open, much like the hardware became open with the advent of the PC, and the value shift has to occur somewhere. Tim believes that the value is shifting to "infoware", or clever and well-timed applications of these open software components. The key concept here is that of the "platform". Perl is as much a part of a Web development platform as x86 assembler was fundamental to the wave of IBM PC applications.

But, warns O'Reilly, there is still a danger of even the open standards that drive email becoming closed and proprietary. Anyone who's ever dealt with MS-Exchange, or tried to port SQL-based apps from Access/MS-SQL to a new platform, knows the dangers. HTTP and HTML are also constantly challenged by the browser and server vendors—witness DAV, the LAYER element, and IE5's DHTML behaviors, if you have any doubt.

What's also interesting, apart from the reactionary, if appropriate, message about the underlying standards themselves becoming fragmented, is the idea that these new top-level Web applications will themselves form the core of a proprietary layer, just as software became proprietary on top of open hardware specs. Just because I use Python and Yahoo uses Python doesn't mean I have the source to their Web mail system. There's a great example of how useful it would be for these applications to remain open, at Glenn Fleischmann's site (http://isbn.nu).

This site lets you track the prices of a given book, or set of books, by ISBN (International Standard Book Number) across a wide range of commerce-driven bookstore sites. Every so often, he has to change the underlying algorithms because B&N or Borders or Amazon changes the way its pages are laid out. If they all made their information open, say, with an XML-based, downloadable standard format ("isbn.dtd"?), Glenn could sell advertising on the site and retire. And the highest-priced booksellers would have to provide their value elsewhere, in the form of reviews, commentary, etc. For example, there are two reviews of my book on the B&N site, and five on amazon.com, one of which is the same as the review at B&N. Fatbrain, on the other hand, has an excerpt—the first few pages of Chapter 10—online, but no reviews other than the one they themselves provide. For some reason, when people want to comment publicly, they do so at Amazon, even though it isn't the cheapest place to buy the book, nor does Amazon have the best and most complete set of information about the book itself.
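
A purely hypothetical sketch of what such a shared format might enable follows. The element and attribute names, ISBN, stores, and prices are all invented (no "isbn.dtd" ever existed), and the parsing is shown in Python rather than any particular store's toolchain:

    import xml.etree.ElementTree as ET

    # The "isbn.dtd"-style record below is invented for illustration; no such
    # shared format existed. Element and attribute names are hypothetical.
    record = """
    <book isbn="0000000000">
      <title>An Example Book</title>
      <offer store="store-a.example.com" price="31.96" currency="USD"/>
      <offer store="store-b.example.com" price="27.99" currency="USD"/>
    </book>
    """

    root = ET.fromstring(record)
    offers = [(o.get("store"), float(o.get("price"))) for o in root.findall("offer")]

    # With every store publishing the same structure, price comparison is a sort
    # over data, not a screen-scrape that breaks whenever a page layout changes.
    store, price = min(offers, key=lambda pair: pair[1])
    print("Cheapest copy of ISBN %s: %s at $%.2f" % (root.get("isbn"), store, price))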

One could argue that once pricing, availability, shipping, and discount (cost) information is made available, what's to stop reviews, excerpts, and code from being freely available in an open format as well? Perhaps we'll see it someday. Hell, it's great that the information is available at all, and that it can be parsed out using open source tools like Perl and PHP, which is how I generate my sales rank charts (http://dhtml-guis.com/book/salesrank/): I grab the pages with lynx, parse out the sales rank and ratings with Perl, and format them with PHP, all on a Linux box running Apache.
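
For readers who want to picture that pipeline, here is a loose sketch of the same idea in Python. The URL and the regular expression are placeholders; Champeon's own version uses lynx, Perl, and PHP as described above:

    import re
    import urllib.request

    # Hypothetical product page and pattern; every store's markup differs, and
    # any of them can change without notice, which is the fragility described above.
    BOOK_PAGE = "http://bookstore.example.com/item/0000000000"
    RANK_PATTERN = re.compile(r"Sales\s+Rank:\s*#?([\d,]+)", re.IGNORECASE)

    def fetch_sales_rank(url=BOOK_PAGE):
        """Fetch a product page and pull the sales rank out of its HTML."""
        with urllib.request.urlopen(url, timeout=30) as resp:
            html = resp.read().decode("utf-8", errors="replace")
        match = RANK_PATTERN.search(html)
        return int(match.group(1).replace(",", "")) if match else None

    if __name__ == "__main__":
        print("sales rank:", fetch_sales_rank())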

I made the source code for the JavaScript object wrappers available on my site, but also released it under the GPL and LGPL, because I know that if others use them and fine-tune them, we'll all have a better set of tools for cross-platform DHTML. I want people to use, and improve, the wrappers, as long as they provide feedback so we can all benefit.

In the old days, you know, six years ago, you had to pay Mead Data Central an arm and a leg to access their Lexis/Nexis legal database. I know; we worked with MDC and Commerce Clearing House back in the SGML days. We were pumped about the idea that they were using an open standard (SGML) to categorize and store this information, with the promise that it might be made available via HTML. (This was before XML, and, for that matter, before HTML was corrupted by the IMG tag and tables-as-layout.) They were understandably very reluctant to part with their investment, and couldn't figure out a way to make money off their databases without charging up front. I believe MDC is still acting like a jealous guard dog over its investment to this day.

One of our clients, Oxford University Press, is going to be making the Oxford English Dictionary available via the Web someday, and is in the process of making much of their American National Biography available, a single bio at a time, via their site (http://www.oup-usa.org/anb/). Merriam-Webster already makes their dictionary available online (http://www.m-w.com). It's all about very small transactions.

It's happening. The value of information is in its exchange, its mode of transfer, its form, and its collection. But there's an uneasy tension between those who collect and hoard—without whom there would be nothing to exchange or distribute—and those who distribute, without whom there would be no value in the collections.

One delicious irony is that the Web—as Tim points out—is the ultimate in time-sharing computing, which went out of vogue in the early 1970s.

Anyway, I just wanted to underscore some of the relationships between what he says in his article about TCP/IP, HTTP, and HTML/XML and the state of Web applications, on the one hand, and, on the other, email: the killer app that makes it possible for this community to exist, using open standards like TCP/IP, DNS, SMTP, MIME, and plain ASCII text formatted to 78 characters per line.

The article wraps up with some talk about business models, and makes the point that openness is good for the entire industry. Personally, I agree, but I have some reservations about the idea that we could be in the middle of two inflection points: just as closed hardware became open and enthroned software, and the networks became open and enthroned open source, it may be too much to ask that Web applications, the new power base built atop open software, stay open. I hope, with Tim, that the big players (like Amazon and eBay) recognize that the situation has already changed, and that this change is fundamentally different from the hardware-to-software shift because of so-called "network effects". I hope that big companies like Microsoft, whose power may well be on the wane, recognize that it is still possible to thrive under the new paradigm, just as IBM thrives today. And I hope that this network remains open for all.

So, in a nutshell, if you'll pardon the expression, that's why I block HTML email on the WebDesign-L list. Or something.

Don Box

Don is co-founder of Developmentor and one of the creators of the SOAP specification. (www.develop.com/dbox)

I just read "Where the Web Leads Us" on XML.com. Interesting stuff. I think you mischaracterized MS's involvement in SOAP, however. SOAP is not just an MS thing. I've been involved with SOAP since 1998, when Dave Winer originally pitched the idea to MS. I definitely do not work for MS, and I am a co-author of the SOAP spec. The same can be said for Dave Winer. My office-mate, Keith Brown, also doesn't work for MS, and he shipped the first SOAP implementation (as Perl source, no less).

SOAP isn't some massive CORBA/WinDNA-like infrastructure. SOAP is just a small set of conventions that codify the existing practice of simulating RPC using XML and HTTP. These conventions are no more (or less) friendly to Windows than they are to Linux (our bits run on both). These conventions are no more (or less) friendly to IIS than they are to Apache (again, our bits run on both). These conventions are no more (or less) friendly to COM than they are to Java or Perl (our bits run on all three). In fact, SOAP is much closer to CORBA than it is to DCOM, despite MS involvement. I personally look at SOAP as a reasonable way to bridge the divide between warring factions in the industry, which is why I took exception to your statements regarding the Linux community. I would rather see us all get along than see a second (or third, or fourth) competing protocol that offers no significant advantage other than the fact that the MS name does not appear on the spec.
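
The "existing practice" Box refers to is easy to sketch: package a method name and its arguments as a small XML document and POST it over HTTP. The Python fragment below shows only that generic pattern; it does not reproduce SOAP's actual envelope elements or namespaces, and the endpoint, element names, and method are invented:

    import urllib.request

    # The generic practice SOAP codifies: an RPC call expressed as XML over an
    # HTTP POST. This is NOT the SOAP wire format; the element names, endpoint,
    # and method are invented for illustration.
    ENDPOINT = "http://rpc.example.com/endpoint"

    payload = """<?xml version="1.0"?>
    <call method="getStockQuote">
      <arg name="ticker" type="string">ORCL</arg>
    </call>
    """

    request = urllib.request.Request(
        ENDPOINT,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "text/xml"},
        method="POST",
    )

    # A real server would parse the XML, dispatch to a local function, and reply
    # with an XML document describing the return value (or a fault).
    with urllib.request.urlopen(request, timeout=30) as resp:
        print(resp.read().decode("utf-8"))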

Dave Winer

Dave Winer, the voice of Scripting News and the developer of Frontier, is behind the XML-RPC specification. (www.scripting.com)

It's great to have someone of O'Reilly's stature in the Unix, open source and scripting worlds, echoing what we've been saying. XML-RPC-based interapplication communication is too big a thing to be just for Microsoft.

UserLand comes from Windows and Mac, the commercial world, with our hands open. To the leaders of the Linux world, help us by competing with us. Our vision is much larger than any single software product. We want to build a new more powerful Web out of all kinds of software.

It's frustrating that the Linux world focuses so much on Microsoft, and so little on working with us. The most powerful business model possible is working together. That's the lesson of the Web. A link is such a big thing. Now let's make the links work between our software. What a revolution that would be.

Where we go from here

These days the most exciting stuff in XML-RPC is cross-network applications that run "on top" of the RPC layer. We've been doing interfaces connecting desktop writing tools to the Web, for content syndication, and search engine integration with content management systems. There's a lot more that needs to be done in business-to-business interfaces, calendar applications, electronic mail and conferencing.
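
For anyone who hasn't used XML-RPC, the calling side is about as small as an RPC mechanism gets. Here is a minimal client sketch using Python's standard library; the endpoint URL, method name, and arguments are hypothetical stand-ins for the kinds of syndication and search interfaces described above:

    import xmlrpc.client

    # Minimal XML-RPC client sketch using Python's standard library. The endpoint
    # and method name are hypothetical stand-ins for a syndication interface of
    # the kind described above.
    server = xmlrpc.client.ServerProxy("http://rpc.example.com/RPC2")

    # The remote call reads like a local one; arguments and the return value are
    # marshalled to and from XML-RPC's wire format behind the scenes.
    headlines = server.syndication.getRecentItems("scriptingNews", 5)
    for item in headlines:
        print(item)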

The connection between O'Reilly, UserLand, and Microsoft is just beginning. When it's done, my goal is that the walls between each of the components of the technology industry will be way down, if not all the way down. I want the energy of the Web to flow between all our platforms and environments, with the Internet as the transport: open interfaces and plug-compatibility at an application level. This is bigger than any single operating system or scripting environment; it's nothing less than the next level of the Internet.

PS: An important correction. XML-RPC is not mine. It's a spec that was created 1.5 years ago by Don Box of Developmentor, Mohsen Al-Ghosein and Bob Atkinson of Microsoft, and myself. But it doesn't belong to any of us; it belongs to the Web, in the same way Linux belongs to the Web. If I ever had any ownership of XML-RPC, I divest myself of it here and now.

Alfred

Alfred is a web developer. (alfred@smtp.canberra.net.au)

Just a short note about "business models." I work for a small ISP that develops Web applications. We only use open source software to develop with, and when we add features to a program or create new ones, we release them back to the community. We make our money by having the functionality first and providing it to our customers. And by putting our changes back into the public arena, we find many other people enhancing our patches, so we can then offer our customers better service at no extra cost to us. It is a win-win situation. This email was spurred by your article about where the Web is going, which I feel is right on the money.

Paul Everitt

Paul is one of the developers of Zope, an Open Source application server.

Thanks for chastising the Open Source community for focusing on Microsoft. Instead of spending energy trying to kill its enemies, the movement should focus on helping its allies. I particularly liked the treatment near the end, starting with:

"This free software thing must be a bubble because we can't figure out how anybody's going to make money at it." My argument is that people are already making more money at it than we can count. So that's a very important paradigm shift and it brings me back again to this idea about where the open source community should be focusing its energy. If the frontier is in developing applications to deliver online services to people, ..."

There's plenty of money to be made at it. People just have to let go of the age of software empire building. We have spent a lot of time thinking formally about this subject and feel we have really hit on the nuances of how to build a valuable business based on Open Source software. In fact, our goal for the coming year is to become "the leader in Open Source for web development." I'm confident we'll get there—we have financial backing, demonstrated leadership in Open Source, and a category-defining Web services platform. We have written up, and presented at numerous conferences, a detailed view of open source business models (http://www.digicool.com/Library/FTPB/). This talk focuses on how one does a valuation of Open Source businesses. If the question is, "What can a company be worth if they give everything away?", the answer is, "A hell of a lot."

Our story has a striking aspect. Hadar Pedhazur, our board chairman and principal at Verticality Investment Group (providers of our first round funding), convinced us to open source our software. He did this a month after the money landed. Much of this presentation is an outgrowth of his arguments about changing to a new business model. Point being, it continues to shock people that a VC would conclude that going open source was the path to success. Anyway, I hope you get a chance to look at our arguments in favor of an open source business model. Besides, there's a quote from you in there.