A Survey of The Internet: The accidental superhighway [excerpts]

The Economist, July 1, 1995

© 1995 The Economist Newspaper Limited. All rights reserved


By CHRISTOPHER ANDERSON

Like a flock of birds  
How the Internet works without really trying

MOST people imagine that anything as complicated as, say, an economy or a global computer network must be designed and run by some central authority. Take the Internet. Those who know little about it usually reckon it is run by a company to which you pay a subscription fee and an hourly charge to use the network. The way it works, they think - if they think about it at all - is that once you are in the system, your computer sends requests or messages to a central data centre. Big computers in this centre send out information in answer to your requests or pass your messages to other users. If you want to put some of your own material on to the network, you approach the company and strike a deal.

It all sounds perfectly plausible, but it is completely off the mark. In his book 'Being Digital' (Alfred Knopf, 1995), Nicholas Negroponte, the director of MIT's famed Media Lab, gives an example of the 'centralised mind-set' fallacy that leads to such expectations. We usually assume 'that the frontmost bird in a V-shaped flock is the one in charge and the others are playing follow-the-leader. Not so. The orderly formation is the result of a highly responsive collection of processors behaving individually and following simple harmonious rules without a conductor.'

This is as good a description of the Internet as any. For collection of processors, read all the computers and wires that make up the thousands of smaller networks connected by the Internet; for harmonious rule, read a method of transferring data called TCP/IP, stemming from the Internet's military origins (see box, next page). The rule says simply that data shall be broken up into chunks called 'packets', and that the first part of each packet shall consist of the address it should go to. That is all. What happens next is not laid down in any master plan. There is no central computer; indeed, there is no centre at all. Far from being a hub with spokes, the Internet is more like a spider's web, with many ways of getting from point A to point B.
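To make the rule concrete, here is a minimal sketch in Python of the chunk-and-address idea. The field names and the 512-byte chunk size are invented purely for illustration; real TCP/IP headers carry far more than an address.

    def packetise(message, destination, chunk_size=512):
        # message should be bytes; break it into chunks and prefix each chunk
        # with where it should go. This is a toy format, not the real wire format.
        packets = []
        for seq, start in enumerate(range(0, len(message), chunk_size)):
            packets.append({
                "destination": destination,                   # the address at the front
                "sequence": seq,                              # lets the far end put things back in order
                "payload": message[start:start + chunk_size],
            })
        return packets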

Physically, it has little substance: most of it is simply leased space on existing telephone networks, with some dedicated computers at connection points. Forrester Research, a consultancy in Cambridge, Massachusetts, estimates that total sales of Internet-specific hardware this year will be only $50m; like a parasite, the Internet uses the multi-billion dollar telephone networks as its hosts and lets them carry most of the cost. That makes it largely a 'virtual' network, running on top of the physical network of the telephone companies.

This works because the Internet does not need what telephone companies consider to be their main assets: the centralised mainframes and big switches they use to control their network. Instead, the Internet uses distributed intelligence, taking advantage of the telephone companies' lines while bypassing their tollgates.

Imagine you wanted to make a telephone call from New York to Los Angeles without paying the long-distance charge. If you had the right kind of telephone and knew enough people across the country, you might call someone at the western limit of your local zone, who would call someone at the western edge of their local zone and patch you through, and so on across the country. This may be an impractical method of making telephone calls, but it hints at the way the Internet uses the telephone networks without being controlled by them.

All the content on the Internet is held in computers known as 'servers' at the edges of the network, usually owned and operated by the companies and organisations that want to distribute the information. Microsoft has servers; so do thousands of other companies (including The Economist). In response to a request, the machines parcel up data in a lot of packets with an address on each one, and send them blindly down the nearest connection to the Internet.

When they arrive on the network, they are read by a computer (called a 'router') that has a rough idea of where things are on the Internet. It reads the addresses and sends the packets in the right general direction, using the best path available at that moment. The same thing happens at the next intersection, and so on until the packets reach their destination. The network's best path from A to B at any one time may bear no relation to real-world geography. At the moment this paragraph was written, for example, the best path from London to Amsterdam, using one Internet provider, was via New Jersey. Other providers might have used different routes.
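A router's 'rough idea of where things are' can be pictured as a table mapping blocks of addresses to neighbouring routers. The sketch below is a toy version under the packet format above; real routers match binary address prefixes, and the router names here are made up.

    ROUTES = {
        "192.0.2.": "router-new-jersey",       # hypothetical neighbours for blocks of addresses
        "198.51.100.": "router-amsterdam",
    }

    def next_hop(packet):
        # Send the packet in the right general direction; no router knows the whole map.
        for prefix, neighbour in ROUTES.items():
            if packet["destination"].startswith(prefix):
                return neighbour
        return "router-backbone"               # default: pass it towards the core and hope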

None of the routers has a map of the whole Internet; it just knows the best way to the next router at that time. That makes it impossible to predict what path a particular packet will take. It all depends on what is available at that moment; the individual packets making up a single message may end up taking different routes, only to be sewn back together again at their destination.
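The sewing back together is simple once each packet carries a sequence number, as in the toy format sketched earlier: the destination sorts whatever arrives, in whatever order, and rejoins the payloads.

    def reassemble(packets):
        # Packets may arrive in any order; put them back by sequence number and rejoin.
        ordered = sorted(packets, key=lambda p: p["sequence"])
        return b"".join(p["payload"] for p in ordered)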

Free-for-all
This is the power of networked intelligence. The Internet does not need any particularly smart computers to run the show, just a lot of dumb-but-fast ones that know how to work together. The secret of its success is an idea of breathtaking simplicity. Think up a universal way for networks to share data that will work with any kind of network, of any size, carrying any kind of data, on any sort of machine. Let anyone use it, for free, with no restrictions or limitations. Then just stand back.

Networks want to connect. As Metcalfe's law states, the value of a network grows roughly with the square of the number of people who use it. Local area networks linking PCs within offices have been widespread for years, but isolated from each other. The Internet broke that bottleneck. It offers a standard method of transmitting data that works equally well for anything from voice to e-mail. Most importantly, it is in the public domain. Nobody owns it and nobody charges a fee for its use.
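In its usual formulation the law counts the possible pairwise connections, so the value V of a network of n users grows roughly as

    V(n) \;\propto\; \binom{n}{2} \;=\; \frac{n(n-1)}{2} \;\approx\; \frac{n^{2}}{2}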

Proprietary networks using different data standards can be part of the Internet as long as they package their data to the TCP/IP standard when they meet each other. But increasingly they use TCP/IP internally, too, because otherwise they miss out on the thousands of Internet software programs. These allow the Internet to be used for things its founders never imagined, from telephone calls to live rock concerts. An open standard means more users speaking a common language, and hence a potentially huge audience, which makes it worthwhile producing such programs.

But software is just the beginning. Internet content - everything from classic books to underground music - is exploding even more spectacularly. This same critical mass of users has stimulated a creative outpouring not seen since the arrival of the PC (another open standard). Without any prospect of profit, thousands of individuals have put millions of pages on-line - anything from complete libraries of technical information to day-by-day personal diaries or mini-directories to their favourite part of the Internet. Some of them do it because the Internet has the remarkable power to make an ordinary person an on-line celebrity; it bypasses distribution channels and public-relations machines. Others do it because they see a new world emerging on the Internet, and want to contribute to it. Still others do it simply because the Internet is there, and nothing stops them. If the site is interesting enough, it might be visited by hundreds or thousands of people a day.

All this activity keeps the wires humming and attracts millions of new users each year. It also puts a great deal of pressure on the network; the companies that provide access and store data are often swamped by the relentlessly rising tide of traffic. In theory, the solution is simple: buy more and faster equipment. But that takes more money, and therein lies a problem. The Internet's economics are still stuck in its non-commercial past.

A changeling's tale

SAY Internet, and you instantly conjure up a picture of creative anarchy. Yet, incongruously, that cheerfully chaotic child was fathered by cold-war paranoia and born in a military laboratory. It started life in 1969, as ARPAnet, named after its sponsor, the Pentagon's Advanced Research Projects Agency. The aim was modest: to allow computer scientists and engineers working on military contracts all over America to share expensive computers and other resources. As an afterthought a few researchers cooked up a way of sending messages, too. 'E-mail', as it became known, quickly turned the network into a new communications link.

The Internet owes its main technical advantage to its military origins. Splitting data into tiny packets which can take different routes to their destination makes it hard to eavesdrop on messages. And a 'packet-switched' network can resist large-scale destruction, even a nuclear attack; if one route is knocked out, packets will simply travel along one that remains intact.

Until 1983 the Internet consisted of fewer than 500 'host' computers, almost exclusively in American military labs and academic computer-science departments. But the word was getting out to other academics. By 1987 the Internet had grown to include 28,000 host computers at hundreds of different universities and research labs.

Using it was still difficult and frustrating, but its power was already obvious. No other way of linking universities around the world was so universal and so flexible. Internet users invented ways for many people to participate in open discussions, created software and document libraries on the network and made them accessible to all. This was exciting stuff for computer scientists and some other academics, but it remained a cloistered world.

Yet during the late 1980s, while the Internet was growing in the academic world, a networking revolution of another sort was taking place outside. Businesses realised that, having traded their mainframes for a multiplicity of PCs, they needed some way to recapture the mainframe's ability to share data and devices such as printers. So they strung wires around their offices and connected the PCs together.

These internal 'local area networks' (LANs) did more than save money; they changed the way people worked. E-mail took off within offices, and soon between them, as companies created 'wide area networks' to connect distant workplaces. But there it stopped. Different software and hardware standards used by different companies made creating wider networks a nightmare of incompatibility.

At home, PCs had made computer power affordable, and modems had allowed them to be connected up over telephone lines to commercial 'on-line' services and 'bulletin boards' - electronic discussion groups and software libraries usually set up by enthusiasts. Both of these grew steadily, but not explosively. Each had disadvantages. The networks offered by CompuServe, the leading on-line service provider, and others that followed in its wake were national, even global, but they were closed. The providers controlled what was available. Private bulletin-board systems, which had sprung up in their thousands, were unrestricted but usually confined to a small group of users near the host computer.

Around the same time the American government was relaxing its hold on its network. The National Science Foundation, a civilian agency, was now paying for most of it, originally to connect the agency's supercomputer centres, but later as a way to allow all kinds of academic and government researchers to communicate. For the first time companies were allowed to join, although not to use the network for purely commercial purposes.

Still, that was enough. Having conquered the academic world, the Internet began to serve as a connection point for commercial networks, both to reach academics and to communicate amongst themselves. 'The Net came at a unique time when the computers and networks that existed were waiting for a way to bring them together,' says John Curran, chief technical officer at Bolt Beranek and Newman, a Cambridge, Massachusetts, technology company widely credited with creating much of the early Internet. Commercial use skyrocketed as restrictions were eased, and last year companies passed universities as the dominant users. In April all remaining curbs on commercial use were lifted, and the National Science Foundation began to phase out the last direct federal subsidies for the network. The Internet had grown up.

Freeloading as a way of life: The strange economics of cyberspace

A SIMPLE question: if the Internet runs over telephone lines, why does it cost the same to send an e-mail message around the world as it does to send it next door? Answer: no one really knows; maybe it should. Just as the Internet has overturned the conventional wisdom on building telecoms networks, so it has challenged their archaic pricing systems. Unhelpfully, though, it offers nothing more substantial than a vacuum in their place. When Internet builders lie awake at night worrying that the whole thing might fall apart, as often as not their nightmares are about its economics.

Start with the distance puzzle. The first reason why distant messages cost no more than local ones is that the Internet, although it runs on telephone lines, uses them much more frugally than voice calls do. A voice call is an analogue signal which needs a lot of electronic space to avoid interference, so it takes up an entire line for the duration of the call. By contrast, the Internet is digital, so its data bits - ones and zeros - can be compressed.

Second, and more important, Internet data is split into packets, which do not need a line to themselves. Packets from hundreds of sources are mixed together and shoved down the pipe in a jumble. The router at the other end of the line receives each one, reads its address and sends it in the right direction. When you make a telephone call, you are consuming a scarce and expensive resource: a whole line. When you send a message on the Internet, you are sharing a plentiful and cheap resource: the line's entire bandwidth. Your packets are just a drop in a passing river.
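A sketch of that sharing, with everything about it invented for illustration: packets from many senders are simply interleaved onto the one line, and each is small enough that nobody notices the others.

    import itertools

    def multiplex(*senders):
        # Interleave packets from many senders onto one shared line (a toy round-robin;
        # a real link simply transmits whatever happens to arrive next).
        for batch in itertools.zip_longest(*senders):
            for packet in batch:
                if packet is not None:
                    yield packet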

But there is a third reason: telecoms pricing is a notorious scam. A large part of the price of a telephone call (often more than 40%) goes to the recipient's telephone company for taking it the last few miles. Through a complicated accounting scheme known as 'settlements', telecoms companies exchange billions of dollars each year to pay for the local component of international calls.

The Internet bypasses all this. It usually operates on leased lines, out of reach of national telecoms accounting. Even if these companies could track the traffic, they would simply find a constant stream of ones and zeros, 24 hours a day.

Digital data makes short work of all the anachronistic pricing schemes and implicit cartels of the international telecoms market. Telephone calls are priced at what the market will bear. Data transmission on leased lines is priced somewhere nearer to what it actually costs. The difference can take several noughts off the amount you pay. Talk to a friend abroad for an hour and you may be charged $50. Make the same call on the Internet, using software from companies such as VocalTec, and you pay nothing, or nothing more than the cost of a local phone call. Even after allowing for your monthly fee, the call costs just a few cents.

Forget economics, let's surf
One of the many remarkable things about the Internet is that once you have paid your monthly connection charge - say $20 - it appears to be free. Send one e-mail message around the world or send a thousand, the price is the same: $0. In fact, each e-mail message does cost somebody something, because it consumes a tiny bit of 'bandwidth', the capacity of the expensive data pipes that make up the Internet. But since the Internet providers have no way of billing for such infinitesimal consumption, they have settled for a rough approximation instead. They multiply the number of their subscribers by the average network usage to calculate the capacity they need to lease, which gives them a fair idea what to charge. As long as the average usage does not change much, this works.
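A back-of-the-envelope version of that sum, with every number below invented purely for illustration:

    subscribers = 10_000                    # assumed
    average_bits_per_second = 1_000         # assumed average use per subscriber
    busy_hour_allowance = 3                 # assumed headroom for peak periods

    capacity_to_lease = subscribers * average_bits_per_second * busy_hour_allowance
    monthly_cost_of_capacity = 150_000      # assumed, in dollars
    break_even_fee = monthly_cost_of_capacity / subscribers   # $15 a month per subscriber

    print(capacity_to_lease, break_even_fee)

The provider's fee only has to cover that break-even figure plus overheads, so long as the averages hold.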

But usage does change. Until recently the Internet has been mostly a world of text, which is an efficient way to communicate. A million bytes can capture the text of a 700-page book, but only 50 spoken words, five medium-sized pictures or three seconds of video. Yet people are now flooding the Internet mainly for the sake of byte-hungry multimedia. The World Wide Web lets users navigate the network by simply clicking on colourful screens of words, pictures, sound and video. Already the Web accounts for more than a third of the traffic on the Internet, and will soon chew up more bandwidth than any other service. Assumptions about average use may need to be revised.

This worries some network analysts. The Internet is a shared resource, like the fish in the sea. Economic theory is gloomy about these: individuals tend to exploit shared resources, but look after private ones. Eventually that leads to depletion which ruins the resource for all its users: the sea becomes fished out, the Internet gets swamped.

At the moment users have every incentive to exploit the Internet. The more they use it, the more they get for their monthly fee. And many users would rather click on data-rich pictures than read lots of frugal lines scrolling down the screen. Some analysts, such as Hal Varian and Jeffrey MacKie-Mason, two economists at the University of Michigan, now argue that those who spend all day surfing the Internet should pay more than those who just send the occasional e-mail message. Pricing, they say, should be based on usage.

That is easier said than done. At the moment, individual usage is usually not even measured, much less charged for. Counting all those packets just means more work for some overloaded computer. Likewise, the most obvious sort of usage pricing - a charge per packet - would consume more computer capacity than is needed to transmit the packets in the first place. A simpler way to charge (which is actually incorporated in the next version of TCP/IP) is to allow each packet to carry a 'priority'. High-priority packets - live transmission such as voice or video - would flow through the network without delay; low-priority packets - such as e-mail - would wait for a lull. Internet providers would charge more for high-priority items.
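A sketch of how a provider might act on such a priority field: the field itself is the real proposal, but the two-queue scheduler and the field values below are only an illustration.

    from collections import deque

    high_priority = deque()    # live traffic such as voice or video
    low_priority = deque()     # e-mail and other traffic that can wait for a lull

    def enqueue(packet):
        # Each packet declares its own priority level.
        (high_priority if packet.get("priority") == "high" else low_priority).append(packet)

    def next_packet_to_send():
        # High-priority packets always go first; the provider charges more for them.
        if high_priority:
            return high_priority.popleft()
        if low_priority:
            return low_priority.popleft()
        return None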

The reason why usage-based pricing on the Internet is so controversial is that it would abolish one of the system's big attractions: the ability to surf at whim, with no meter running. Users do not have to work out if the latest picture of someone's fish tank is worth a few cents. They can have a look because it is fun, and free. Critics fear that usage-based pricing might kill the vitality that built the Internet. Where it has been tried - in New Zealand, Chile, and some universities - usage has usually dropped.

Compromises may be possible. Users may decide at the outset what level of service they can afford. A high-priority connection would cost more than a low-priority one, but it would still be charged at a flat fee. Or providers could charge on tiered usage: users would pay a fixed monthly fee up to a certain level; if they exceeded that, they would pay more. This might discourage excessive use without that ticking-meter feeling.

There is one other option: do nothing. The Internet has come this far without complicated charging systems, and although the traffic jams today are bad, they have been worse in the past. The existing system does already include a crude form of usage pricing. Big pipes cost more than little ones: a leased line that can handle up to 64,000 bits per second can cost 50 times more than a dial-up connection at only 14,400 bits per second. Perhaps this is enough. In competitive markets, the cost of bandwidth is dropping almost as quickly as usage is climbing. The computers that route Internet traffic are getting faster. Maybe market forces, along with the rough limits of pipe size, will set prices at the right levels. Nobody really knows.

Lawless


Too many loopholes in the Net?
'THE net interprets censorship as damage, and routes around it.' This quote from John Gilmore, a founding member of the Electronic Frontier Foundation, often appears on the Internet. It reflects its users' confidence that their electronic world, designed to resist nuclear attack, can also shrug off government regulation. Because of its global reach and its decentralised design, they believe, it is unpoliceable.

They may be mistaken. On April 27th the US Congress held a hearing on terrorism in the wake of the bomb that killed 167 people in a federal building in Oklahoma. Senator Edward Kennedy waved a 76-page 'Terrorist's Handbook' that his staff had downloaded from the Internet, and explained that it contained instructions for building different types of bombs, including the ammonium nitrate bomb used in Oklahoma: 'Right now we're considering a telecommunications reform bill in the Senate that is trying to do something about porn on the Internet - we should do something about this terrorist information, too.'

The telecoms reform bill the senator mentioned would do more than something about pornography on the Internet. It would criminalise the sending of any content deemed (by some unspecified definition) 'obscene, lewd, lascivious, filthy, or indecent'. The bill was passed on June 15th; similar legislation is moving through the House. In Washington state the legislature has already passed a bill that would make Internet access providers liable for any obscene content going through their lines. At least a dozen other states have proposed similar legislation.

Internet activists claim that such legislation puts unconstitutional restrictions on free speech. But that would not stop it from becoming law before it is challenged in the courts. Since screening for all obscene content is impossible, network providers in states that passed such laws might have to shut down. Even the Internet cannot route around that.

Internet providers have so far ducked responsibility for what they transmit by claiming 'common-carrier' protection. They argue that they are not like publishers, who can be held responsible for the contents of their publications, but like telephone companies, acting simply as a conduit for messages they have no knowledge of.

The current round of congressional hearings indicates that the common-carrier exemption is not enough. Somebody has to take responsibility for protecting vulnerable groups from obscene material. Usually this would be the person who had put the material on the network in the first place. But that person may live in a place where such material is perfectly permissible. And besides, it is quite easy to put material on the Internet anonymously. This leaves the Internet community with two broad options: to regulate itself or be regulated. Most Internet users prefer self-regulation, but the nuts and bolts of that are a technical nightmare.

The problem is that obscenity on the Internet can appear under an infinite number of guises. Some of them are obvious, including newsgroups with names such as alt.binaries.pictures.erotica.children, along with Web sites put up by Penthouse, Playboy and a host of amateurs. Others are harder to find: live 'keyboard sex' on Internet Relay Chat channels; secret libraries known only to porn traders; even a live video-sex service, where real women obey the typed commands of paying viewers. Cutting off the more obvious pornography newsgroups is easy, but that will merely make them adopt a heavier disguise. More generic filters are bound to fail. No computer on earth can recognise an obscene picture.

One possible self-regulating solution seems to be voluntary 'tagging' of adult material, so that specially configured software on users' computers can screen it out. Parents could set the software to view such material only with a special password. Several companies are already developing software that would allow this. Smart kids can no doubt get round it; but users hope that such gestures may yet save the Internet from being regulated to death.
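A sketch of what such screening software might do, with the tag vocabulary and the password mechanism entirely hypothetical:

    BLOCKED_TAGS = {"adult"}                  # hypothetical voluntary tag

    def may_display(page_tags, parent_password_entered=False):
        # Screen out pages whose publishers have tagged them as adult material,
        # unless the parental override password has been entered.
        if BLOCKED_TAGS & set(page_tags):
            return parent_password_entered
        return True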

Inadvertent criminals
But it is not just pornography that concerns the would-be regulators: they also paint the Internet as a haven for piracy and crime. There is something in that. Software programs by the hundreds are illegally copied on-line, and hackers are having a field day breaking into poorly guarded sites. But some of the problem also has to do with the failure of legislators to keep up with new technology.

Copyright law is having particular trouble adjusting to the new age. It has not been able to come to terms with a unique property of digital information: the ease of making an infinite number of perfect copies, essentially for free. Copy an article, casually post it to a newsgroup, and at a keystroke you may have robbed a company of thousands of sales. For publishers who still see a threat in the photocopier, the Internet looks like the end of the world.

The problem with copyright law is that it is unable to distinguish between abuse and ordinary use. On the Internet, any number of normal activities may inadvertently break the law. The simple act of reading a document on-line often makes a copy of it on a user's hard disk. Internet providers often keep copies of popular Web sites on their local servers so their subscribers do not jam their long-distance lines. Then there are innumerable deliberate, but essentially innocent, violations without a commercial motive: copying an interesting electronic article and e-mailing it to a friend, or putting it on a company LAN.

In the end copyright laws must change to reflect this new digital domain. Publishers need some assurance that their work will not be pirated to the point where they have nothing left to sell, yet a way must be found to avoid criminalising normal use.

Cyberspace has no respect for trademark law, either. An arm of the Internet Society issues 'domain' names - mcdonalds.com, for example - to companies on request. But it does not usually check to see if the person requesting it owns the trademark; indeed, when a journalist took mcdonalds.com as a prank, the company had to threaten to sue him to get it back. Worse, different companies may own the same name in different countries, yet an Internet domain is inherently global. As the writs fly, groups such as the International Trademark Association are desperately trying to find some solution.

Crime is another mess. The trouble is not so much that criminals will use the Internet, but that the police will not be able to keep track of them there. Techniques for encryption - scrambling messages so that only the intended recipient can read them - are now so advanced that they can be virtually uncrackable, even with the biggest computers. Encrypted messages make Internet wiretapping nearly impossible. This is why the American government has banned the export of strong encryption technology, and law-enforcement agencies want to set up a system which allows them to decode messages sent within the country. Internet users, fearing invasion of their privacy, have mounted the virtual barricades. But the battle has slowed the deployment of encryption technology in general, ironically making it even easier for criminals already on-line to steal information.

Crime, the maintenance of public decency and the protection of intellectual property are the problems of any mature and complex society. No doubt the Internet will find solutions to them in time; but meanwhile they act as a reminder that building a real electronic nation involves a lot more than laying down the pipes.

The shape of nets to come

 
"There could be one, or many"

IN THE early 1970s, a new communications network began to take off in America. It bypassed traditional links and grew from the bottom up. Millions of people joined it to communicate and share information. It developed its own culture and language. Visionaries saw it unleashing creativity and opening the door to an egalitarian future. It was CB radio. By 1980 it was almost dead; it had collapsed under the weight of its own popularity, its channels drowned under a sea of noise and chaos. Could the Internet go the same way?

For parts of it, the answer seems clearly yes. In some newsgroups and mailing lists the 'signal-to-noise ratio' - the fraction of messages that are even remotely interesting - is becoming quite unrewarding. Sensible people will go elsewhere and those newsgroups will die.

But as a whole, the Internet seems safe from CB radio's fate. Where CB radio had just 40 channels, the Internet has an infinite number. Broadcast chatter is just one of its many forms of communication; the Web, moderated mailing lists and private e-mail are not as easily swamped. Technically, there is some concern about the system's ability to make the leap from a few thousand computers to tens of millions. But so far nothing has emerged that experts feel they cannot set right.

Most important, the Internet has commercial potential. A critical mass of companies have now bet heavily on it. Already some of the world's biggest telecoms companies are developing 'industrial-strength' parts of the Internet for such businesses. AT&T, MCI, BT - along with some of the regional Baby Bells, and smaller networking companies such as BBN and PSI - are rolling out services with lots of bandwidth, security, reliability and a help desk to call when things go wrong.

There is plenty of scope for growth. At the moment fewer than 100,000 of the many millions of businesses around the globe are on the Internet. Even in America, the most wired society in the world, less than 7% of the population is connected up. And yet the network seems to be straining at the seams already, with long delays during the American business day.

For a while, more users will continue to strain the network. But much of the congestion that users chafe at is not within the network itself, but at its periphery: in the hundreds of thousands of servers owned by companies, universities and local Internet providers. This is where the Internet content is stored, spread among many private computers. Firms can reduce the queues by buying more and faster servers, or bigger links to the Internet. If they want people to come back to their sites, they will spend what it takes to keep up with demand.

Even if the jams can be avoided, though, some users fret about information overload. They see themselves surfing a sea of random facts, half-baked thoughts and blabber. But the Internet is not a database. It is a world with many facets: information of varying quality, entertainment, people and places. Just like the chaotic real world, the on-line world contains too much information to make sense of. But this is evidence not of its unmanageability, but of its vitality. Like newcomers in a foreign land, people will have to find their place on the Internet. But they will get plenty of help. Entire industries will grow up to make sense of the information available on-line, just as a plethora of guides do in the real world.

The empire strikes back
Today no one in particular owns the Internet, which is to say that hundreds of companies own small parts of it. Over the next few years a shake-out seems inevitable. A dizzying number of companies are clamouring for a share. Beyond the big telephone companies, America alone has 600 small Internet providers. Cable TV companies have suddenly realised that the Internet has bypassed their fitful interactive TV trials. Led by TCI's @Home effort, they are racing to offer Internet and other data services on their own networks.

In this new world, the commercial on-line services - mostly CompuServe, America Online and Prodigy, with the Microsoft Network and Europe Online due to be launched later this year - have an uncertain future. At the moment, the Internet explosion means that their business is booming. They can provide their subscribers with a safe home from which to explore this teeming jungle.

But already hundreds of other companies are developing software to make the Internet easier to use. Until now, the commercial services have made their money by keeping their subscribers within their own service, paying for something that they alone can offer. But content providers are looking for a better deal. There is little incentive to limit their clientele to the 3m subscribers that, say, CompuServe can offer when they can get at least 10m on the Web. Forrester Research projects that the commercial services' growth will peak around 1998; after that the growth is all Net.

Providing Internet access may soon become a tough commodity business. The companies that will continue to make money are those who offer more. Users will need - and be willing to pay for - Internet directories and guidance. Businesses will pay Internet companies to run their on-line storefronts, as some already do.

Yet the best business prospects lie not in carrying other people's bytes or running their stores, but in having something to sell yourself. Telephone companies agree, which is why so many of them are getting into the content business. In May MCI committed itself to investing up to $2 billion in News Corp, following the path taken by Bell Atlantic, Nynex and Pacific Telesis (which invested $300m to start a new content shop), Bell South, SBC Communications and Ameritech (which put $500m into a deal with Disney), and US West (which bought a quarter of Time Warner for $2.5 billion).

In the meantime, the telephone companies are fighting to regain the pipes. The explosive growth of the Internet may have caught them off-guard, but they are now believers. 'A year ago,' says Lance Boxer, MCI's data services chief, 'we thought of the Internet as an interesting model, but not as a necessary model. Now we're probably investing more in the Internet than any company has in history. We've never had an opportunity like this.'

Or a threat? If the Internet does become a 'data dialtone', regular dialtone might find itself out in the cold. It is already possible to make telephone calls on the Internet from specially-equipped computers; the spread of multimedia PCs and faster Internet connections could make this commonplace. At the same time companies are turning an increasing amount of their telephone traffic into digital data and sending it through private data networks, saving up to half their telecoms costs. Regulation permitting, this traffic could eventually move to the Internet. For the telephone companies, 'the only decision is whether they participate in the cannibalisation of their revenues or watch it happen,' says William Schraeder, president of PSI.

At the moment, neither the Internet, nor any digital network, could handle all the world's telephone calls. But MCI predicts that by the turn of the century the Internet will be carrying as much data as the voice networks. When that time comes, the Internet may have to grow up in a hurry. It will be too important to stay in the grey market of borrowed wires and back-of-the-envelope economics.

Growing up could be painful. Mr Schraeder suggests one way things could go wrong: at some point in not too many years the telephone companies have enough customers within their own parts of the Internet to launch an attack on the network. They want to go back to the telephone model that has earned them so much for so long: settlements, usage-based pricing, the works. They cut off independent networks that refuse to go along. The Internet splits in two, leaving a high-priced, orderly business network and a cheap, chaotic consumer network, with minimal interconnection between them. Thus balkanised, the Internet as we know it fades away, leaving the field to commercial networks. The cable TV and telephone companies' vision triumphs.

But there are more ways in which that vision might fail. If enough consumers choose the more chaotic, but more open, independent networks, the telecoms giants would be unable to cut them off. Or the telephone companies could simply accept the new world and the inevitable decline in voice revenues it foretells. They could, as some are doing now, embrace it and look for new ways to make money.

Ironically, in these rosier scenarios the Internet might also fade, but in a different way: it just keeps on growing, absorbing other networks of all sorts until it becomes so ubiquitous that it is simply woven into everyday life, carrying not just data, but telephone calls, television, everything. This is the information superhighway writ large - and small. In this vision, we plug into the data stream as casually as we plug into an electric socket today. Content and transmission are disaggregated; the network has turned into an open road.

This, indeed, may be the more likely future. Despite the racing pace of the Internet today, it will not happen overnight. After all, the PC, more than 15 years after its launch, is still nowhere near being considered an appliance. But ubiquitous, open networking seems as fundamental to civilisation's needs in the first half of the 21st century as ubiquitous, open roads did in the first half of the 20th. The lesson of the Internet is simple and lasting: people want to connect, with as little control and interference as possible. Call it a free market or just an efficient architecture: the power of open networking has only just begun to be felt.