Internet

Introduction
An internet (lower case) is a set of interconnected computer networks.

The Internet (capitalized to distinguish it from the generic term) is an international, distributed, interoperable, packet-switched network. These characteristics enable millions of private and public computers around the world to communicate with each other. These networks are connected together with equipment such as routers, bridges, and switches. This interconnection of multiple computer networks, which otherwise would function only as a series of independent and isolated islands, gives rise to the term “Internet” as we know it today.

In March 2007 the total number of Internet users worldwide was estimated at 1.114 billion, or 16.9% of the world’s population. Internet penetration continent by continent varies from 3.6% in Africa to 69.7% in North America. There are more than 165 million websites.

Pre-Internet Networks
Earlier computer networks were centrally “switched,” so that all messages between any two points on the network were sent through the central switching computer. These networks are today called “star” networks, because there is a central point in the network (the central switching computer) that has wires “radiating” out to all other computers. Though all computer networks of the time were “star” shaped in their architecture, different switching technologies were often employed by each of them.

In the early 1960s, the Department of Defense, like other computer network users, relied on star-shaped networks for military communication. The DoD understood, however, that those networks had at least two problems. First, they were highly vulnerable: anything that rendered the central switching computer inoperative (whether a bomb, sabotage, or just “down time”) would simultaneously render the entire network inoperative. Second, because different star networks used different technologies for switching messages internally, they could not communicate with each other. Messages were confined to the network from which they originated.

In 1964, a researcher at the Rand Corporation, Paul Baran, designed a computer-communications network that had no hub, no central switching station, and no governing authority. In this system, each message was cut into tiny strips and stuffed into “electronic envelopes” called packets, each marked with the address of the sender and the intended receiver. The packets were then released like confetti into the web of interconnected computers, where they were tossed back and forth over high-speed wires in the general direction of their destination and reassembled when they arrived. Baran's packet switching network, as it came to be called, became the technological underpinning of the Internet.

Development of the Internet
The Internet developed out of research efforts funded by the U.S. Department of Defense Advanced Research Projects Agency (ARPA, later renamed DARPA) in the 1960s and 1970s to create and test interconnected computer networks that would not have the two drawbacks noted above. Its purpose was to allow defense contractors, universities, and DoD staff working on defense projects to communicate electronically and to share the computing resources of the few powerful, but geographically separate, computers of the time. In September 1969, a one-node packet-switched network was created at the University of California at Los Angeles (UCLA). Shortly thereafter, four nodes were installed and operating effectively.

ARPA created a standard format for electronic messages that could be used between networks to connect them in spite of internal differences, and it devised an interconnection method based on many decentralized switching computers. Any given message would not travel over a fixed path to a central computer. Rather, it would be “switched” among many different computers until it reached its destination. The network designers set a limit on the size of a single message. If longer than that limit, a message would be broken up into smaller pieces called “packets” that would each be routed individually. This new type of network switching was called “packet switching.”
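
The mechanics of packetization are simple to illustrate. The sketch below (a toy illustration in Python, not any historical ARPA implementation; the packet format and size limit are invented for the example) cuts a message into addressed, numbered packets and reassembles them even when they arrive out of order:

```python
# Toy illustration of packet switching: split a message into addressed
# packets, let the network deliver them in any order, then reassemble.
import random

PACKET_SIZE = 8  # bytes of payload per packet (arbitrarily small here)

def packetize(message: bytes, src: str, dst: str) -> list[dict]:
    """Cut a message into packets, each carrying addressing metadata."""
    chunks = [message[i:i + PACKET_SIZE] for i in range(0, len(message), PACKET_SIZE)]
    return [{"src": src, "dst": dst, "seq": n, "payload": c}
            for n, c in enumerate(chunks)]

def reassemble(packets: list[dict]) -> bytes:
    """Restore the original message by ordering packets by sequence number."""
    return b"".join(p["payload"] for p in sorted(packets, key=lambda p: p["seq"]))

packets = packetize(b"Messages are split into routable pieces.", "host-a", "host-b")
random.shuffle(packets)  # packets may take different routes and arrive out of order
assert reassemble(packets) == b"Messages are split into routable pieces."
```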

By creating a system that relied on many decentralized computers to handle message routing, rather than one central computer as was the method for star-shaped networks, ARPA produced a network that could still operate even if many of its individual computers malfunctioned or were damaged. ARPA implemented a prototype network called “ARPANET” to test out and continue development of this new technology.

By the mid-1970s, computer scientists had developed several software communications standards &mdash; or protocols &mdash; for connecting computers within the same network. At about the same time, ARPANET scientists developed a protocol for connecting different networks to each other, called the Transmission Control Protocol/Internet Protocol (“TCP/IP”) software suite. This approach requires that individual networks be connected together by gateway interface devices, called switches or routers. Thus, interconnected networks are, in effect, a series of routers connected by transmission links. Packets of data are passed from one router to another, via the transmission links.

By 1977, the ARPANET had 111 hosts. Since many universities and research facilities on the ARPANET later connected their local area networks to it, the ARPANET eventually became the core network of the ARPA Internet, an internetwork of many networks using TCP/IP as the underlying architecture. ARPANET was very important in the development of the Internet. In its time it was the largest, fastest, and most populated part of the Net.

In 1981, the National Science Foundation (NSF) provided a grant to establish the Computer Science Network (CSNET) to provide networking services to all university computer scientists.

Throughout the 1970s and 1980s, the interconnection of computer networks using TCP/IP continued to grow, spurred by uses such as e-mail.

Unrelated to ARPA's work on this packet switching technology, at about the same time NSF funded the creation of several supercomputer sites around the country. There were far fewer supercomputers than scientists and researchers interested in using them. NSF understood that it would be important to find ways for researchers to use these computers “remotely,” that is, without having to travel physically to the supercomputer site. NSF was aware of the work going on with the ARPANET, and determined that the ARPANET might provide the sort of access methods needed to link researchers to the supercomputers.

The military portion of ARPANET was integrated into the Defense Data Network (DDN) in the early 1980s. In 1985, NSF announced a plan to connect one hundred universities, in addition to the five existing supercomputer centers located around the country, to the Internet in order to provide greater access to high-end computing resources. Recognizing the increasing importance of this interconnected network to U.S. competitiveness in the sciences, NSF then embarked on a new program with the goal of extending Internet access to every science and engineering researcher in the country.

In 1986, NSF, in conjunction with a consortium of private-sector organizations, completed a new long-distance, wide-area network, dubbed the “NSFNET” backbone. Although private entities were now involved in extending the Internet, its design still reflected ARPANET's original goals. NSFNET connected a variety of local university networks and hence enabled nationwide access to the new supercomputer centers. NSFNET connected the supercomputer centers at 56,000 bits per second, the speed of a typical computer modem today. In a short time, the network became congested and, by 1988, its links were upgraded to 1.5 megabits per second. A variety of regional research and education networks, supported in part by NSF, were connected to the NSFNET backbone, thus extending the Internet's reach throughout the United States.

The idea of calling this sort of network an “Internet” reflects the fact that it was conceived primarily to allow interconnection among existing incompatible networks; in its early incarnations, in other words, the Internet was viewed less as a network in its own right and more as a means of connecting other networks together.

Creation of NSFNET was an intellectual leap. It was the first large-scale implementation of Internet technologies in a complex environment of many independently operated networks. NSFNET forced the Internet community to iron out technical issues arising from the rapidly increasing number of computers and to address many practical details of operations, management, and conformance.

ARPANET was taken out of service in 1990, but by that time NSFNET had supplanted ARPANET as a national backbone for an "Internet" of worldwide interconnected networks. ARPANET's influence continued because TCP/IP replaced most other wide-area computer network protocols, and because its design, which provided for generality and flexibility, proved to be durable in a number of contexts. At the same time, its successful growth made clear that these design priorities no longer matched the needs of users in certain situations, particularly regarding accounting and resource management.

NSFNET usage grew dramatically, jumping from 85 million packets in January 1988 to 37 billion packets in September 1993. To handle the increasing data traffic, the NSFNET backbone became the first national 45 megabits-per-second Internet network in 1991. Throughout its existence, NSFNET carried, at no cost to institutions, any U.S. research and education traffic that could reach it.

Privatization of the Internet
By 1992, the volume of traffic on NSFNET was approaching capacity, and NSF realized it did not have the resources to keep pace with the increasing usage. Consequently, the members of the consortium formed a private, non-profit organization called Advanced Networks and Services (“ANS”) to build a new backbone with transmission lines having thirty times more capacity. For the first time, a private organization, not the government, principally owned the transmission lines and computers of a backbone. At the time that privately owned networks started appearing, general commercial activity on the NSFNET was still prohibited by an acceptable use policy; the growing number of privately owned networks were thus effectively precluded from exchanging commercial data traffic with each other over the NSFNET backbone. Several commercial backbone operators circumvented this limitation in 1991, when they established the Commercial Internet Exchange (“CIX”) to interconnect their own backbones and exchange traffic directly.

In 1992, the U.S. Congress enacted legislation authorizing NSF to allow commercial traffic on its network. Recognizing that the Internet was outpacing its ability to manage it, and wanting the government out of the backbone business, NSF announced in May 1993 that it would radically alter the architecture of the Internet by retiring the NSFNET backbone. In its place, NSF designated a series of Network Access Points (NAPs) where private commercial backbone operators could “interconnect.” In 1994, NSF announced that four NAPs would be built, in San Francisco, New York, Chicago, and Washington, D.C.; they were provided by Ameritech, PacBell, Sprint, and MFS Datanet. An additional interconnection point, known as MAE-West, was provisioned by MFS Datanet on the West Coast.

Federal support for the NSFNET backbone ended on April 30, 1995. At that time, the expanding network of commercial backbones permanently replaced NSFNET, effectively privatizing the Internet.

Development of the World Wide Web
The history of NSFNET and NSF's supercomputing centers also overlapped with the rise of personal computers and the launch of the World Wide Web in 1991 by Tim Berners-Lee and colleagues at CERN, the European Organisation for Nuclear Research, in Geneva, Switzerland. The NSF centers developed many tools for organizing, locating and navigating through information, including one of the first widely used Web server applications. But perhaps the most spectacular success was Mosaic, the first freely available Web browser to allow Web pages to include both graphics and text, which was developed in 1993 by students and staff working at the NSF-supported National Center for Supercomputing Applications (NCSA) at the University of Illinois, Urbana-Champaign. In less than 18 months, NCSA Mosaic became the Web "browser of choice" for more than a million users and set off an exponential growth in the number of Web servers as well as Web surfers. Mosaic was the progenitor of modern browsers such as Microsoft Internet Explorer and Netscape Navigator.

The growth of the Internet has been fueled in large part by the popularity of the World Wide Web. The number of websites on the Internet grew from one in 1991, to 18,000 in 1995, to fifty million in 2004, and to more than one hundred million in 2006. This incredible growth has been due to several factors, including the realization by businesses that they could use the Internet for commercial purposes, the decreasing cost and increasing power of personal computers, the diminishing complexity of creating websites, and the expanding use of the Web for personal and social purposes.

From its creation to its early commercialization, most computer users connected to the Internet using a “narrowband” dial-up telephone connection and a special modem to transmit data over the telephone system’s traditional copper wires, typically at a rate of up to 56 kilobits per second (“Kbps”). Much faster “broadband” connections have subsequently been deployed using a variety of technologies. These faster technologies include coaxial cable, upgraded copper digital subscriber lines, fiber-optic cables, and wireless, satellite, and broadband over power line (BPL) technologies.
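
The practical difference between these speeds is easy to work out. Here is a back-of-the-envelope sketch (the file size is arbitrary, and protocol overhead and congestion are ignored; the 1.5 Mbps and 45 Mbps figures echo the backbone speeds cited earlier):

```python
# Rough transfer times for a 5-megabyte file at different link speeds
# (8 bits per byte; protocol overhead and congestion ignored).
FILE_SIZE_BYTES = 5 * 1024 * 1024

for name, bits_per_second in [("56 Kbps dial-up", 56_000),
                              ("1.5 Mbps line", 1_500_000),
                              ("45 Mbps backbone link", 45_000_000)]:
    seconds = FILE_SIZE_BYTES * 8 / bits_per_second
    print(f"{name}: {seconds:,.1f} seconds")
# 56 Kbps dial-up: 749.0 seconds; 1.5 Mbps line: 28.0 seconds;
# 45 Mbps backbone link: 0.9 seconds
```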

Domain Name Registration
In the years following NSFNET, NSF helped navigate the road to a self-governing and commercially viable Internet during a period of remarkable growth. The most visible, and most contentious, component of the Internet transition was the registration of domain names. Domain name registration associates a human-readable character string (such as “nsf.gov”) with Internet Protocol (IP) addresses, which computers use to locate one another.
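
This association can be observed directly from any networked machine, since most languages expose the operating system's resolver. A minimal Python sketch (the domain is the example used above; the address printed depends on the network and the current DNS records):

```python
# Resolve a human-readable domain name to an IP address using the
# operating system's resolver, which in turn queries the DNS.
import socket

domain = "nsf.gov"  # the example name used above
print(domain, "->", socket.gethostbyname(domain))
```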

The Department of Defense funded early domain name registration efforts because most registrants were military users and awardees. By the early 1990s, academic institutions comprised the majority of new registrations, so the Federal Networking Council (a group of government agencies involved in networking) asked NSF to assume responsibility for non-military Internet registration. When NSF awarded a five-year agreement for this service to Network Solutions, Inc. (NSI), in 1993, there were 7,500 domain names.

In September 1995, as the demand for Internet registration became largely commercial (97%) and grew by orders of magnitude, the NSF authorized NSI to charge a fee for domain name registration. Previously, NSF had subsidized the cost of registering all domain names. At that time, there were 120,000 registered domain names. In September 1998, when NSF’s agreement with NSI expired, the number of registered domain names had passed 2 million.

ICANN
The year 1998 marked the end of NSF’s direct role in the Internet. That year, the network access points and routing arbiter functions were transferred to the commercial sector. And after much debate, the Department of Commerce’s National Telecommunications and Information Administration formalized an agreement with the non-profit Internet Corporation for Assigned Names and Numbers (ICANN) for oversight of domain name registration. Today, anyone can register a domain name through a number of ICANN-accredited registrars.

Internet Organization
The Internet is unlike any technology or industry created before it. No single organization owns, manages, or controls the Internet. It is a fusion of cooperative yet independent networks. The thousands of individual networks that make up the global Internet are owned and administered by a variety of organizations, such as private companies, universities, research labs, government agencies, and municipalities. Member networks may have presidents or CEOs, but there is no single authority for the Internet as a whole.

Substantial influence over the Internet's future now resides with the Internet Society (ISOC), a voluntary membership organization whose purpose is to promote global information exchange through Internet technology.

A number of nonprofit groups keep the Internet working through their efforts at standards development and consensus building. They include:

 * Internet Society (umbrella Internet organization)
 * Internet Architecture Board (IAB) (oversees technology standards)
 * Internet Engineering Task Force (IETF) (improves technology standards)
 * Internet Research Task Force (IRTF) (research into the future of the Internet)
 * Internet Corporation for Assigned Names and Numbers (ICANN) (manages the Domain Name System and the allocation of Internet Protocol numbers)
 * VeriSign (formerly Network Solutions) (first domain registrar and still manager of the central database and accredited registrars)

Internet Architecture
The Internet is often described as being composed of multiple “layers”: a physical layer consisting of the hardware infrastructure used to link computers to each other; a logical layer of protocols, such as TCP/IP, that control the routing of data packets; an applications layer consisting of the various programs and functions run by end users, such as a Web browser that enables Web-based e-mail; and a content layer, such as a Web page or streaming video transmission.

The layers are increasingly complex and specific components that are superimposed on, but independent from, the other components. The technical protocols that form the foundation of the Internet are open and flexible, so that virtually any form of network can connect to and share data with other networks through the Internet. As a result, the services provided through the Internet (such as the World Wide Web) are decoupled from the underlying infrastructure to a much greater extent than with other media. Moreover, new services (such as Internet telephony) can be introduced without necessitating changes in transmission protocols, or in the thousands of routers spread throughout the network.
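
This decoupling is visible in code: an application-layer message is handed to the transport layer as opaque bytes, and neither TCP nor IP needs to know anything about its contents. A minimal sketch (assuming Internet access from the machine running it; example.com is just a placeholder host):

```python
# An application-layer message (an HTTP request) rides on the transport
# layer (TCP), which rides on the network layer (IP); each layer treats
# the data from the layer above as opaque bytes.
import socket

host = "example.com"  # placeholder host for illustration

with socket.create_connection((host, 80)) as sock:   # transport + network layers
    request = f"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
    sock.sendall(request.encode("ascii"))             # application-layer content
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

print(response.split(b"\r\n", 1)[0].decode())         # e.g. "HTTP/1.1 200 OK"
```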

The architecture of the Internet also breaks down traditional geographic notions, such as the discrete locations of senders and receivers. The Internet uses a connectionless, "adaptive" routing system, which means that a dedicated end-to-end channel need not be established for each communication. Instead, traffic is split into "packets" that are routed dynamically between multiple points based on the most efficient route at any given moment. Many different communications can share the same physical facilities simultaneously. In addition, any "host" computer connected directly to the Internet can communicate with any other host.

Data packets may potentially travel from their originating computer server across dozens of networks and through dozens of routers before they reach an Internet service provider and arrive at a destination computer. This process of disassembly, transmission, and reassembly of data packets may take as little as a fraction of a second for a simple piece of information like a text e-mail traveling along a high-speed network, or it may take several hours for a larger piece of information like a high-resolution video traveling a long distance along a low-speed network.

Today's Internet
Today, the Internet connects millions of individuals and organizations in a way that allows almost instantaneous communications using computers, computerized mobile devices, and other network attachments. End users interact with each other through an ever-expanding universe of content and applications, such as e-mail, instant messaging, chat rooms, commercial websites for purchasing goods and services, social networking sites, Web logs (“blogs”), music and video downloads, political forums, voice over IP (“VoIP”) telephony services, streaming video applications, and multi-player network video games. Internet users include individuals of virtually all ages and walks of life, established businesses, fledgling entrepreneurs, non-profit groups, academic and government institutions, and political organizations.

Individual end users (and networks of end users) arrange for Internet access via a “last mile” connection to an Internet service provider (“ISP”), which provides, in turn, routing and connections from the ISP’s own network to the Internet. Content and applications providers offer their products and services to end users via network operators, which enable connectivity and transport into the middle, or “core,” of the Internet.

The Internet has various components: local networks, regional networks, and national backbone networks.

Before the turn of the century, most computer users connected to the Internet using “narrowband” dial-up telephone connections and modems to transmit data over the telephone system's traditional copper wirelines. Much faster “broadband” connections have more recently been deployed using various technologies, including coaxial cable wirelines, upgraded copper digital subscriber lines (“DSL”), and, to a lesser extent, fiber-optic wirelines, wireless, satellite, and broadband over power line (“BPL”) systems.

How the Internet Works
The Internet (or any proprietary IP network) is a set of routers connected by links. Packets of data get passed from one router to another, via links. A packet is forwarded from router to router until it arrives at its destination. Typically, each router has several incoming links on which packets arrive, and several outgoing links on which it can send packets. When a packet shows up on an incoming link, the router will figure out on which outgoing link the packet should be forwarded. If that outgoing link is free, the packet can be sent out on it immediately. But if the outgoing link is busy transmitting another packet, the newly arrived packet will have to wait; it will be “buffered” in the router's memory, waiting its turn until the outgoing link is free.

Buffering lets the router deal with temporary surges in traffic. The router will be programmed to determine which packets should be delayed and also, when the link is available, which buffered packet should be transmitted. That is, a packet prioritization scheme is devised. This could be a simple first-in, first-out scheme; a scheme favoring applications sensitive to packet delay; a pay-for-priority scheme; or something else. But if packets keep showing up faster than they can be sent out on some outgoing link, the number of buffered packets will grow and grow, and eventually the router will run out of buffer memory.

At that point, if one more packet shows up, the router has no choice but to discard a packet. It can discard the newly arriving packet, or it can make room for the new packet by discarding something else. But something has to be discarded. The router will be programmed to determine which packets should be dropped, thus creating a second packet prioritization scheme. Again, this could be a simple first-in, first-out scheme; a scheme favoring applications sensitive to dropped packets; a pay-for-priority scheme; or something else. Dropped packets can be retransmitted, but for applications, such as voice, that require the packets to arrive and be reassembled within a short period of time, such packet recovery might not occur quickly enough to retain service quality.
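
Both prioritization decisions can be sketched together. The toy model below (an illustration of the general idea, not any real router's algorithm; the priorities and buffer capacity are invented) transmits the highest-priority buffered packet first and, when its buffer is full, drops the lowest-priority packet to make room:

```python
# Toy model of a router's output queue: packets carry a priority;
# transmission always takes the highest-priority packet, and when
# the buffer is full the lowest-priority packet is dropped.
import heapq

class OutputQueue:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.heap: list[tuple[int, int, str]] = []  # (-priority, arrival#, packet)
        self.arrivals = 0

    def enqueue(self, packet: str, priority: int) -> None:
        self.arrivals += 1  # arrival counter keeps FIFO order within a priority
        heapq.heappush(self.heap, (-priority, self.arrivals, packet))
        if len(self.heap) > self.capacity:
            # Buffer overflow: discard the lowest-priority (and, within that
            # priority, most recently arrived) packet.
            dropped = max(self.heap)
            self.heap.remove(dropped)
            heapq.heapify(self.heap)
            print("dropped:", dropped[2])

    def transmit(self) -> str:
        # The outgoing link is free: send the highest-priority packet.
        return heapq.heappop(self.heap)[2]

q = OutputQueue(capacity=2)
q.enqueue("voice packet", priority=2)
q.enqueue("email packet", priority=1)
q.enqueue("video packet", priority=2)  # buffer full: "email packet" is dropped
print("sent:", q.transmit())           # "voice packet" goes out first
```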

With such congestion, at least two problems may occur. One problem is dropped packets. Some applications are more sensitive than others to dropped packets. A second problem is “jitter” caused by the delay of certain packets. Internet traffic is usually “bursty,” with periods of relatively low activity punctuated by occasional bursts of packets. (For example, browsing the Web generates little or no traffic while reading a page, but a burst of traffic when the browser needs to fetch a new page.)

Even if the router is programmed to minimize delay by only delaying low-priority packets when congestion absolutely requires such delay, if the high-priority traffic is bursty, then low-priority traffic will usually move through the network with little delay, but will experience noticeable delay whenever there is a burst of high-priority traffic. This on-again, off-again delay is called jitter. Jitter has no effect when downloading a big file, for which one's concern is the average packet arrival rate rather than the arrival time of a particular packet. But the quality of applications like voice conferencing or VoIP, which rely on steady streaming of interactive, real-time communication, can suffer a lot if there is jitter.
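
Jitter can be quantified as the variation in packet inter-arrival times. A rough sketch of one such measure (real VoIP systems use more refined estimators, such as the interarrival-jitter formula in RFC 3550; the arrival times here are invented):

```python
# Estimate jitter as the mean absolute deviation of inter-arrival
# times from their average. All times are in milliseconds.
def jitter(arrival_times_ms: list[float]) -> float:
    gaps = [b - a for a, b in zip(arrival_times_ms, arrival_times_ms[1:])]
    mean_gap = sum(gaps) / len(gaps)
    return sum(abs(g - mean_gap) for g in gaps) / len(gaps)

steady = [0, 20, 40, 60, 80, 100]   # a packet every 20 ms: no jitter
bursty = [0, 20, 40, 95, 115, 135]  # a 55 ms gap mid-stream: jitter
print(jitter(steady))  # 0.0
print(jitter(bursty))  # 11.2 -- noticeable variation in arrival timing
```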

Regulating Network Traffic
Traditionally, data traffic has traversed the Internet on a “first-in-first-out” (“FIFO”) and “best-efforts” basis. This protocol for data transmission was established principally as a result of DARPA's original priority, which was to develop an effective technique for communications among existing interconnected networks, and which placed network survivability (the potential for robust network operation in the face of disruption or infrastructure destruction) as the top goal in designing the overall architecture of this network of networks.

Since the Internet’s earliest days, however, computer scientists have recognized that network resources are scarce and that traffic congestion can lead to reduced performance. Although different data transmission protocols and usage-based pricing mechanisms were explored throughout the 1980s and 1990s, the debate over broadband connectivity and net neutrality is becoming increasingly strident.