“The Internet is at once a world-wide broadcasting capability, a mechanism for information dissemination, and a medium for collaboration and interaction between individuals and their computers without regard for geographic location.”

“The Internet is America's most important platform for economic growth, innovation, competition, free expression, and broadband investment and deployment.”

“The digital world can be an environment rich in humanity, a network not of wires but of people. The Internet, in particular, offers immense possibilities for encounter and solidarity. This is something truly good. A gift from God.”
- 1 Definitions
- 2 Size of the Internet
- 3 Historical development
- 4 Internet organizations
- 5 Internet architecture
- 6 Today's Internet
- 7 Vulnerabilities
- 8 Employment statistics
- 9 References
- 10 See also
- 11 External resources
Definitions
Communications Decency Act
The Internet is
“[t]he international computer network of both federal and nonfederal interoperable packet switched data networks.”
COPPA
The Internet is
“collectively the myriad of computer and telecommunications facilities, including equipment and operating software, which comprise the interconnected world-wide network of networks that employ the Transmission Control Protocol/Internet Protocol, or any predecessor or successor protocols to such protocol, to communicate information of all kinds by wire, radio, or other methods of transmission.”
Federal Networking Council
The Internet is
“the global information system that — (i) is logically linked together by a globally unique address space based on the Internet Protocol (IP) or its subsequent extensions/follow-ons; (ii) is able to support communications using the Transmission Control Protocol/Internet Protocol (TCP/IP) suite or its subsequent extensions/follow-ons, and/or other IP-compatible protocols; and (iii) provides, uses or makes accessible, either publicly or privately, high level services layered on the communications and related infrastructure described herein.”
General
The Internet (capitalized to distinguish it from the generic term) is an international, distributed, interoperable, packet-switched network. These characteristics enable millions of private and public computers around the world to communicate with one another. The constituent networks are connected by equipment such as routers, bridges, and switches. This interconnection of multiple computer networks, which would otherwise function only as a series of independent and isolated islands, gives rise to the "Internet" as we know it today.
Washington state law
The Internet is
“collectively the myriad of computer and telecommunications facilities, including equipment and operating software, that comprise the interconnected world wide network of networks that employ the transmission control protocol/internet protocol, or any predecessor or successor protocols to such protocol, to communicate information of all kinds by wire or radio.”
Size of the Internet
The Indexed Web contains at least 5.54 billion pages.
Historical development
“It seems reasonable to envision, for a time 10 or 15 years hence, a 'thinking center' that will incorporate the functions of present-day libraries together with anticipated advances in information storage and retrieval.

The picture readily enlarges itself into a network of such centers, connected to one another by wide-band communication lines and to individual users by leased-wire services. In such a system, the speed of the computers would be balanced, and the cost of the gigantic memories and the sophisticated programs would be divided by the number of users.”
— J.C.R. Licklider, Man-Computer Symbiosis (1960).
Pre-Internet networks
Earlier computer networks were centrally “switched,” so that all messages between any two points on the network were sent through the central switching computer. These networks are today called “star” networks, because there is a central point in the network — the central switching computer — that has wires “radiating” out to all other computers. Though all computer networks of the time were “star” shaped in their architecture, different switching technologies were often employed by each of them.
In the early 1960s, the Department of Defense, like other computer network users, relied on star-shaped networks for military communication. The DoD understood, however, that those networks had at least two problems. First was that they were highly vulnerable. Anything that rendered the central switching computer inoperative — whether a bomb, sabotage, or just “down time” — would simultaneously render the entire network inoperative. Second, because different star networks used different technologies for switching messages internally, they could not communicate with each other. Messages were confined to the network from which they originated.
In 1964, a researcher at the RAND Corporation, Paul Baran, designed a computer-communications network that had no hub, no central switching station, and no governing authority. In this system, each message was cut into tiny strips and stuffed into "electronic envelopes" called packets, each marked with the address of the sender and the intended receiver. The packets were then released like confetti into the web of interconnected computers, where they were tossed back and forth over high-speed wires in the general direction of their destination and reassembled when they arrived. Baran's packet-switching network, as it came to be called, became the technological underpinning of the Internet.
Development of the Internet
“We are on the verge of a revolution that is just as profound as the change in the economy that came with the industrial revolution. Soon electronic networks will allow people to transcend the barriers of time and distance and take advantage of global markets and business opportunities not even imaginable today, opening up a new world of economic possibility and progress.”
The Internet developed out of research efforts funded by the U.S. Department of Defense's Advanced Research Projects Agency ("ARPA," later renamed DARPA) in the 1960s and 1970s to create and test interconnected computer networks that would not have the two drawbacks noted above. Its purpose was to allow defense contractors, universities, and DoD staff working on defense projects to communicate electronically and to share the computing resources of the few powerful, but geographically separate, computers of the time.
ARPA created a standard format for electronic messages that could be used between networks to connect them in spite of internal differences; and it devised an interconnection method that was based on many decentralized switching computers. Any given message would not travel over a fixed path to a central computer. Rather, it would be “switched” among many different computers until it reached its destination. The network designers set a limit on the size of a single message. If longer than that limit, a message would be broken up into smaller pieces called "packets" that would each be routed individually. This new type of network switching was called "packet switching."
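The splitting and individual routing of packets described above can be sketched in a few lines of Python. This is an illustration only: the field names and the 8-byte payload size are invented for the example, not taken from any real protocol (real networks use binary headers and roughly 1,500-byte frames).

```python
# Illustrative sketch: cut a message into packets, deliver them out of
# order (as independent routing allows), and reassemble them at the end.
import random

MAX_PAYLOAD = 8  # bytes of message data carried per packet (hypothetical)

def packetize(message: bytes, src: str, dst: str) -> list:
    """Cut a message into packets, each carrying addresses and a sequence number."""
    chunks = [message[i:i + MAX_PAYLOAD] for i in range(0, len(message), MAX_PAYLOAD)]
    return [{"src": src, "dst": dst, "seq": n, "data": chunk}
            for n, chunk in enumerate(chunks)]

def reassemble(packets: list) -> bytes:
    """Restore the original message by ordering packets by sequence number."""
    return b"".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

packets = packetize(b"Packets may arrive out of order.", "hostA", "hostB")
random.shuffle(packets)        # packets may take different routes and arrive out of order
print(reassemble(packets))     # b'Packets may arrive out of order.'
```

Because each packet carries its own addressing and sequence information, no single path, and no central switch, needs to survive for the message to get through.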
By creating a system that relied on many decentralized computers to handle message routing, rather than one central computer as was the method for star-shaped networks, ARPA produced a network that could still operate even if many of its individual computers malfunctioned or were damaged. ARPA implemented a prototype network called "ARPANET" to test out and continue development of this new technology.
- Logical map of the ARPANET, April 1971
By the mid-1970s, computer scientists had developed several software communications standards — or protocols — for connecting computers within the same network. At about the same time, ARPANET scientists developed a protocol for connecting different networks to each other, called the Transmission Control Protocol/Internet Protocol ("TCP/IP") software suite. This approach requires that individual networks be connected together by gateway interface devices, called switches or routers. Thus, interconnected networks are, in effect, a series of routers connected by transmission links. Packets of data are passed from one router to another, via the transmission links.
By 1977, the ARPANET had 111 hosts. Since many universities and research facilities on the ARPANET later connected their local area networks to the ARPANET, it eventually became the core network of the ARPA Internet, an internetwork of many networks using the Transmission Control Protocol/Internet Protocol (TCP/IP) communication language as the underlying architecture. ARPANET was very important in the development of the Internet. In its time it was the largest, fastest, and most populated part of the Net.
Unrelated to ARPA's work on this packet switching technology, at about the same time the National Science Foundation (NSF) funded the creation of several supercomputer sites around the country. There were far fewer supercomputers than scientists and researchers interested in using them. NSF understood that it would be important to find ways for researchers to use these computers "remotely," that is, without having to travel physically to the supercomputer site. NSF was aware of the work going on with the ARPANET, and determined that that network might provide the sort of access methods needed to link researchers to the supercomputers.
The military portion of ARPANET was integrated into the Defense Data Network (DDN) in the early 1980s. In 1985, NSF announced a plan to connect one hundred universities to the Internet, in addition to five already-existing supercomputer centers located around the country to provide greater access to high-end computing resources. Recognizing the increasing importance of this interconnected network to U.S. competitiveness in the sciences, however, NSF embarked on a new program with the goal of extending Internet access to every science and engineering researcher in the country.
In 1986, NSF, in conjunction with a consortium of private-sector organizations, completed a new long-distance, wide-area network, dubbed the "NSFNET" backbone. Although private entities were now involved in extending the Internet, its design still reflected ARPANET's original goals. NSFNET connected a variety of local university networks and hence enabled nationwide access to the new supercomputer centers. NSFNET initially connected the supercomputer centers at 56,000 bits per second (56 Kbps), roughly the speed of a dial-up modem. In a short time, the network became congested and, by 1988, its links were upgraded to 1.5 megabits per second. A variety of regional research and education networks, supported in part by NSF, were connected to the NSFNET backbone, thus extending the Internet's reach throughout the United States.
The name "Internet" reflects the fact that the network was conceived primarily to interconnect existing, incompatible networks; in its early incarnations, the Internet was viewed less as a network in its own right and more as a means of connecting other networks together.
Creation of NSFNET was an intellectual leap. It was the first large-scale implementation of Internet technologies in a complex environment of many independently-operated networks. NSFNET forced the Internet community to iron out technical issues arising from the rapidly increasing number of computers and to address many practical details of operations, management and conformance.
ARPANET was taken out of service in 1990, but by that time NSFNET had supplanted ARPANET as a national backbone for an "Internet" of worldwide interconnected networks. In 1991, the National Science Foundation lifted the restrictions on the commercial use of the Internet.
ARPANET's influence continued because TCP/IP replaced most other wide-area computer network protocols, and because its design, which provided for generality and flexibility, proved to be durable in a number of contexts. At the same time, its successful growth made clear that these design priorities no longer matched the needs of users in certain situations, particularly regarding accounting and resource management.
NSFNET usage grew dramatically, jumping from 85 million packets in January 1988 to 37 billion packets in September 1993. To handle the increasing data traffic, the NSFNET backbone became the first national 45 megabits-per-second Internet network in 1991. Throughout its existence, NSFNET carried, at no cost to institutions, any U.S. research and education traffic that could reach it.
Privatization of the Internet
“It took radio broadcasters 38 years to reach an audience of 50 million, television 13 years, and the Internet just four.”
By 1992, the volume of traffic on NSFNET was approaching capacity, and NSF realized it did not have the resources to keep pace with the increasing usage. Consequently, the members of the consortium formed a private, non-profit organization called Advanced Networks and Services (“ANS”) to build a new backbone with transmission lines having thirty times more capacity. For the first time, a private organization — not the government — principally owned the transmission lines and computers of a backbone.
At the time that privately owned networks started appearing, general commercial activity on the NSFNET was still prohibited by an acceptable use policy. Thus, the expanding number of privately owned networks were effectively precluded from exchanging commercial data traffic with each other using the NSFNET backbone. Several commercial backbone operators circumvented this limitation in 1991, when they established the Commercial Internet Exchange (“CIX”) to interconnect their own backbones and exchange traffic directly.
In 1992, the U.S. Congress enacted legislation allowing the NSF to permit commercial traffic on its network. Recognizing that the Internet's growth was outpacing its ability to manage the network, NSF announced in May 1993 that it would radically alter the architecture of the Internet and withdraw from the backbone business. In the backbone's place, NSF designated a series of Network Access Points (NAPs) where private commercial backbone operators could interconnect. In 1994, NSF announced that four NAPs would be built, in San Francisco, New York, Chicago, and Washington, D.C. The four NSF-awarded NAPs were provided by Ameritech, PacBell, Sprint, and MFS Datanet. An additional interconnection point, known as MAE-West, was provisioned by MFS Datanet on the West Coast.
Development of the World Wide Web
The history of NSFNET and NSF's supercomputing centers also overlapped with the rise of personal computers and the launch of the World Wide Web in 1991 by Tim Berners-Lee and colleagues at CERN, the European Organization for Nuclear Research, in Geneva, Switzerland. The NSF centers developed many tools for organizing, locating and navigating through information, including one of the first widely used Web server applications. But perhaps the most spectacular success was Mosaic, the first freely available Web browser to allow Web pages to include both graphics and text, which was developed in 1993 by students and staff working at the NSF-supported National Center for Supercomputing Applications (NCSA) at the University of Illinois, Urbana-Champaign. In less than 18 months, NCSA Mosaic became the Web "browser of choice" for more than a million users and set off an exponential growth in the number of Web servers as well as Web surfers. Mosaic was the progenitor of modern browsers such as Microsoft Internet Explorer and Netscape Navigator.
The growth of the Internet has been fueled in large part by the popularity of the World Wide Web. The number of websites on the Internet grew from one in 1991, to 18,000 in 1995, to fifty million in 2004, and to more than one hundred million in 2006. This incredible growth has been due to several factors, including the realization by businesses that they could use the Internet for commercial purposes, the decreasing cost and increasing power of personal computers, the diminishing complexity of creating websites, and the expanding use of the Web for personal and social purposes.
From its creation to its early commercialization, most computer users connected to the Internet using a “narrowband” dial-up telephone connection and a special modem to transmit data over the telephone system’s traditional copper wires, typically at a rate of up to 56 kilobits per second (“Kbps”). Much faster “broadband” connections have subsequently been deployed using a variety of technologies. These faster technologies include coaxial cable, upgraded copper digital subscriber lines, fiber-optic cables, and wireless, satellite, and broadband over power line (BPL) technologies.
Domain name registration
In the years following NSFNET, NSF helped navigate the road to a self-governing and commercially viable Internet during a period of remarkable growth. The most visible, and most contentious, component of the Internet transition was the registration of domain names. Domain name registration associates a human-readable character string (such as “nsf.gov”) with Internet Protocol (IP) addresses, which computers use to locate one another.
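The association described above, a human-readable name mapped to the IP addresses computers route by, can be observed from Python's standard socket module, which asks the operating system's resolver to perform the lookup. This sketch shows the lookup side of registration; the example uses "localhost" because it resolves locally without network access.

```python
# Translate a host name into IPv4 addresses via the OS resolver.
import socket

def resolve(name: str) -> list:
    """Return the sorted IPv4 addresses the resolver reports for a host name."""
    infos = socket.getaddrinfo(name, None, family=socket.AF_INET)
    return sorted({info[4][0] for info in infos})

# "localhost" is resolved from the local hosts file; on most systems:
print(resolve("localhost"))    # ['127.0.0.1']
```

For a registered public domain such as "nsf.gov," the same call would return whatever addresses its registrant currently publishes in the Domain Name System.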
The Department of Defense funded early domain name registration efforts because most registrants were military users and awardees. By the early 1990s, academic institutions comprised the majority of new registrations, so the Federal Networking Council (a group of government agencies involved in networking) asked NSF to assume responsibility for non-military Internet registration. In January 1993, the NSF entered into a 5-year cooperative agreement with Network Solutions, Inc. (NSI), to take over the jobs of registering new, nonmilitary domain names, including those ending in .com, .net, and .org, and running the authoritative root server.
In September 1995, as the demand for Internet registration became largely commercial (97%) and grew by orders of magnitude, the NSF authorized NSI to charge a fee for domain name registration. Previously, NSF had subsidized the cost of registering all domain names. At that time, there were 120,000 registered domain names. In September 1998, when NSF’s agreement with NSI expired, the number of registered domain names had passed 2 million.
ICANN
The year 1998 marked the end of NSF’s direct role in the Internet. That year, the network access points and routing arbiter functions were transferred to the commercial sector. And after much debate, the Department of Commerce’s National Telecommunications and Information Administration formalized an agreement with the non-profit Internet Corporation for Assigned Names and Numbers (ICANN) for oversight of domain name registration. Today, anyone can register a domain name through a number of ICANN-accredited registrars.
Internet organizations
The Internet is unlike any other technology or industry created before it. No single organization owns, manages, or controls the Internet. It is a fusion of cooperative yet independent networks. The thousands of individual networks that make up the global Internet are owned and administered by a variety of organizations, such as private companies, universities, research labs, government agencies, and municipalities. Member networks may have presidents or CEOs, but there is no single authority for the Internet as a whole.
Substantial influence over the Internet's future now resides with the Internet Society (ISOC), a voluntary membership organization whose purpose is to promote global information exchange through Internet technology.
A number of nonprofit groups keep the Internet working through their efforts at standards development and consensus building. They include:
- Internet Society (umbrella Internet organization)
- Internet Architecture Board (IAB) (oversees technology standards)
- Internet Engineering Task Force (IETF) (improves technology standards)
- Internet Research Task Force (IRTF) (research into the future of the Internet)
- Internet Corporation for Assigned Names and Numbers (ICANN) (manages the Domain Name System and the allocation of Internet Protocol numbers)
- VeriSign (formerly Network Solutions) (the first domain registrar and still manager of the central database and accredited registrars).
Internet architecture
The Internet is often described as consisting of multiple "layers": a physical layer consisting of the hardware infrastructure used to link computers to each other; a logical layer of protocols, such as TCP/IP, that control the routing of data packets; an applications layer consisting of the various programs and functions run by end users, such as a Web browser that enables Web-based e-mail; and a content layer, such as a Web page or streaming video transmission.
The layers are increasingly complex and specific components, each superimposed on but independent of the others. The technical protocols that form the foundation of the Internet are open and flexible, so that virtually any form of network can connect to and share data with other networks through the Internet. As a result, the services provided through the Internet (such as the World Wide Web) are decoupled from the underlying infrastructure to a much greater extent than with other media. Moreover, new services (such as Internet telephony) can be introduced without necessitating changes in transmission protocols, or in the thousands of routers spread throughout the network.
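The layering idea can be illustrated with a toy encapsulation sketch in Python. The header strings below are simplified, purely textual stand-ins (real TCP/IP headers are binary structures with many more fields), and the address shown is drawn from the reserved documentation range.

```python
# Toy encapsulation: each layer wraps the data handed down from the
# layer above with its own (hypothetical, text-only) header.
def app_layer(content: str) -> str:
    return f"HTTP|{content}"                 # application layer wraps the content

def transport_layer(segment: str, port: int) -> str:
    return f"TCP(port={port})|{segment}"     # logical layer: end-to-end delivery

def network_layer(packet: str, dst_ip: str) -> str:
    return f"IP(dst={dst_ip})|{packet}"      # logical layer: routing between networks

def link_layer(frame: str) -> str:
    return f"ETH|{frame}"                    # physical link: one hop on the wire

# 203.0.113.5 is from TEST-NET-3, an address range reserved for documentation.
wire = link_layer(network_layer(transport_layer(app_layer("<html>hi</html>"), 80), "203.0.113.5"))
print(wire)   # ETH|IP(dst=203.0.113.5)|TCP(port=80)|HTTP|<html>hi</html>
```

Because each layer only touches its own header, any one of them can change, say, copper replaced by fiber at the physical layer, without the layers above it noticing; this is the decoupling the paragraph above describes.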
The architecture of the Internet also breaks down traditional geographic notions, such as the discrete locations of senders and receivers. The Internet uses a connectionless, adaptive routing system, which means that a dedicated end-to-end channel need not be established for each communication. Instead, traffic is split into "packets" that are routed dynamically between multiple points based on the most efficient route at any given moment. Many different communications can share the same physical facilities simultaneously. In addition, any "host" computer connected directly to the Internet can communicate with any other host.
Data packets may potentially travel from their originating computer server across dozens of networks and through dozens of routers before they reach an Internet service provider and arrive at a destination computer. This process of disassembly, transmission, and reassembly of data packets may take as little as a fraction of a second for a simple piece of information like a text e-mail traveling along a high-speed network, or it may take several hours for a larger piece of information like a high-resolution video traveling a long distance along a low-speed network.
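The "most efficient route at any given moment" mentioned above is chosen by routing algorithms. As a minimal sketch, Dijkstra's shortest-path algorithm is run here over a hypothetical five-router network whose edge weights stand in for link delay; interior routing protocols such as OSPF build on this idea, though inter-domain routing on the real Internet (BGP) works quite differently.

```python
# Find the lowest-cost path between two routers with Dijkstra's algorithm.
import heapq

def shortest_path(graph, src, dst):
    """Return (total_cost, path) from src to dst; graph maps node -> {neighbor: cost}."""
    queue = [(0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, weight in graph.get(node, {}).items():
            if neighbor not in seen:
                heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    return float("inf"), []       # destination unreachable

# Hypothetical network: routers A-E, edge weights as link delays in ms.
net = {
    "A": {"B": 5, "C": 2},
    "B": {"A": 5, "D": 4},
    "C": {"A": 2, "D": 7, "E": 3},
    "D": {"B": 4, "C": 7, "E": 1},
    "E": {"C": 3, "D": 1},
}
print(shortest_path(net, "A", "D"))   # (6, ['A', 'C', 'E', 'D'])
```

If a link's delay changes or a router fails, re-running the computation yields a new best path, which is why successive packets of one communication may travel different routes.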
Today's Internet
“The Internet has evolved in three important ways over the past decade: content has evolved from relatively static text and web pages to multimedia high-bandwidth content that requires low latency; usage has rapidly globalized; and access has moved beyond desktop computers using fixed connections to a variety of new devices using mobile broadband. Importantly, the speed of this evolution in technology and its adoption is unparalleled in human history.”
"No one — not even the visionaries who created the Internet a half century ago — could have imagined the extent to which digital connectivity would spur innovation, increase economic prosperity, and empower populations across the globe. Indeed, the Internet's origins in the defense community are today almost an afterthought, as its explosive growth has given it a dramatically different shape. Its creators could not have dreamed of the way and the extent to which our national and global economies have thrived, how innovations have been enabled, and how our population has been empowered by our digital connectivity."
Today, the Internet connects millions of individuals and organizations in a way that allows almost instantaneous communications using computers, computerized mobile devices, and other network attachments. In March 2007 the total number of Internet users worldwide was put at 1.114 billion, or 16.9% of the world’s population.
Measured in terms of ad revenues, for instance, the Internet has grown nearly twice as fast as cable television did in its infancy, and, with the advent of new technologies allowing long-form video on the Web, it has the capacity to emerge as a substitute for television as it presently exists.
“[T]he Internet has at least three hallmarks. First, it allows for promiscuous and interactive flows of information (connection). Suddenly anyone on the network can reach anyone/everyone else, often at very low cost, and without sensitivity to distance. That person can in turn respond, immediately, or on a delay. Second, by mediating user experience, the Internet is capable of generating shared objects and spaces (collaboration). People can "meet" on a website and comment or alter the text, pictures, videos, software, or other content they find there. Finally, the Internet allows for additional or at least more exquisite forms of observation and manipulation than offline analogs (control). The architecture of networks and interfaces is subject to alteration in a way that can greatly constrain human behavior, more and more of which is taking place through technology.”
End users interact with each other through an ever-expanding universe of content and applications, such as: e-mail, instant messaging, chat rooms, commercial websites for purchasing goods and services, social networking sites, Web logs ("blogs"), music and video downloads, political forums, voice over IP ("VoIP") telephony services, streaming video applications, and multi-player network video games. Internet users include individuals of virtually all ages and walks of life, established businesses, fledgling entrepreneurs, non-profit groups, academic and government institutions, and political organizations.
“Unlike traditional mass media, the Internet is global. Additionally, in contrast to the relatively high barriers to entry in traditional media marketplaces, the Internet offers commercial opportunities to an unusually large number of innovators, and the rate of new service offerings and novel business models is quite high. Taken together, these characteristics give the Internet its strength as a global open platform for innovation and expression.”
Individual end users (and networks of end users) arrange for Internet access via a “last mile” connection to an Internet service provider (“ISP”), which provides, in turn, routing and connections from the ISP’s own network to the Internet. Content and applications providers offer their products and services to end users via network operators, which enable connectivity and transport into the middle, or “core,” of the Internet.
Private industry — including telecommunications companies, cable companies, and Internet service providers — owns and operates the vast majority of the Internet's infrastructure. The various networks that make up the Internet include the national backbone and regional networks, residential Internet access networks, and the networks run by individual businesses or "enterprise" networks. When a user wants to access a website or send an e-mail to someone who is connected to the Internet through a different service provider, the data must be transferred between networks. Data travels from a user's device to the Internet through various means, such as coaxial cable, satellite, or wireless links, to a provider's facility where it is aggregated with other users' traffic. Data crosses between networks at Internet exchange points, which can be either hub points where multiple networks exchange data or private interconnection points. At these exchange points, computer systems called routers determine the optimal path for the data to reach their destination. Data travels through the national and regional networks and exchange points around the globe, as necessary, to reach the recipient's Internet service provider and the recipient.
Before the turn of the century, most computer users connected to the Internet using “narrowband,” dial-up telephone connections and modems to transmit data over the telephone system’s traditional copper wirelines. Much faster “broadband” connections recently have been deployed using various technologies, including coaxial cable wirelines, upgraded copper digital subscriber lines (“DSL”), and to a lesser extent fiber-optic wirelines, wireless, satellite, and broadband over power line (“BPL”) systems.
How the Internet works
The Internet (or any proprietary IP network) can be viewed as a set of routers connected by links. Packets of data are passed from one router to another, via links, until they arrive at their destination. Typically, each router has several incoming links on which packets arrive, and several outgoing links on which it can send packets. When a packet shows up on an incoming link, the router determines on which outgoing link the packet should be forwarded. If that outgoing link is free, the packet can be sent out on it immediately. But if the outgoing link is busy transmitting another packet, the newly arrived packet will have to wait — it will be “buffered” in the router’s memory, waiting its turn until the outgoing link is free.
Buffering lets the router deal with temporary surges in traffic. The router will be programmed to determine which packets should be delayed and also, when the link is available, which buffered packet should be transmitted. That is, a packet prioritization scheme is devised. This could be a simple, first-in, first-out scheme or a favor-applications-sensitive-to-packet-delay scheme, or a pay-for-priority scheme, or something else. But if packets keep showing up faster than they can be sent out on some outgoing link, the number of buffered packets will grow and grow, and eventually the router will run out of buffer memory.
At that point, if one more packet shows up, the router has no choice but to discard a packet. It can discard the newly arriving packet, or it can make room for the new packet by discarding something else. But something has to be discarded. The router will be programmed to determine which packets should be dropped, thus creating a second packet prioritization scheme. Again, this could be a simple, first-in, first-out scheme or a favor-applications-sensitive-to-dropped-packets scheme, or a pay-for-priority scheme, or something else. Dropped packets can be retransmitted, but for those applications, such as voice, that require the packets to arrive and be reassembled within a short period of time, such packet recovery might not occur in the timely fashion needed to retain service quality.
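The buffering and discarding just described can be sketched with a bounded queue. This models only the two simplest policies the text mentions, first-in-first-out transmission and "drop-tail" discard of newly arriving packets when the buffer is full; real routers may instead prioritize by application sensitivity or payment class. All names here are invented for the example.

```python
# A router's output link: FIFO buffering while busy, drop-tail when full.
from collections import deque

class OutputLink:
    def __init__(self, buffer_size: int):
        self.buffer = deque()          # packets waiting for the link to free up
        self.buffer_size = buffer_size # router memory available for this link
        self.dropped = 0               # packets discarded due to congestion

    def enqueue(self, packet) -> bool:
        """Buffer a packet; discard the newcomer if memory is exhausted (drop-tail)."""
        if len(self.buffer) >= self.buffer_size:
            self.dropped += 1
            return False
        self.buffer.append(packet)
        return True

    def transmit(self):
        """When the link frees up, send the oldest buffered packet (FIFO)."""
        return self.buffer.popleft() if self.buffer else None

link = OutputLink(buffer_size=3)
for pkt in ["p1", "p2", "p3", "p4", "p5"]:  # a burst arrives faster than the link drains
    link.enqueue(pkt)
print(link.dropped)       # 2  (p4 and p5 were discarded)
print(link.transmit())    # p1 (first in, first out)
```

Swapping the `enqueue` and `transmit` policies for priority-aware versions is exactly the design choice the two prioritization schemes above describe.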
With such congestion, at least two problems may occur. One problem is dropped packets. Some applications are more sensitive than others to dropped packets. A second problem is “jitter” caused by the delay of certain packets. Internet traffic is usually “bursty,” with periods of relatively low activity punctuated by occasional bursts of packets. (For example, browsing the Web generates little or no traffic while reading a page, but a burst of traffic when the browser needs to fetch a new page.)
Even if the router is programmed to minimize delay by delaying only low-priority packets when congestion absolutely requires it, if the high-priority traffic is bursty, then low-priority traffic will usually move through the network with little delay but will experience noticeable delay whenever there is a burst of high-priority traffic. This on-again, off-again delay is called jitter. Jitter has no effect when downloading a big file, for which one’s concern is the average packet arrival rate rather than the arrival time of any particular packet. But the quality of applications like voice conferencing or VoIP — which rely on a steady stream of interactive, real-time communication — can suffer greatly if there is jitter.
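Jitter can be made concrete by comparing inter-arrival gaps for two packet streams with the same average rate. The arrival times below are invented for illustration, and the worst-deviation measure is just one simple way to quantify jitter; real VoIP stacks typically use a smoothed inter-arrival estimate, as in RTP.

```python
# Two streams of 6 packets over 100 ms: same average rate, different jitter.
arrivals_steady = [0, 20, 40, 60, 80, 100]   # ms; one packet every 20 ms
arrivals_bursty = [0, 21, 55, 58, 95, 100]   # ms; same packets, uneven gaps

def inter_arrival(times):
    """Gaps between consecutive packet arrivals."""
    return [b - a for a, b in zip(times, times[1:])]

def jitter(times):
    """Worst deviation of any gap from the mean gap (a simple jitter measure)."""
    gaps = inter_arrival(times)
    mean = sum(gaps) / len(gaps)
    return max(abs(g - mean) for g in gaps)

print(jitter(arrivals_steady))   # 0.0  -- every gap is exactly 20 ms
print(jitter(arrivals_bursty))   # 17.0 -- gaps swing between 3 ms and 37 ms
```

A file download cares only that all six packets arrived within 100 ms; a voice decoder playing one packet every 20 ms would starve during the 37 ms gap unless it buffers, which adds the very delay real-time conversation cannot tolerate.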
Layering of the Internet[edit | edit source]
Many may consider the Internet and World Wide Web (web) to be synonymous; they are not. Rather, the web is one portion of the Internet, and a medium through which information may be accessed. In conceptualizing the web, some may view it as consisting solely of the websites accessible through a traditional search engine such as Google. However, this content — known as the "Surface Web" — is only one portion of the web. The Deep Web refers to a class of content on the Internet that, for various technical reasons, is not indexed by search engines, and thus would not be accessible through a traditional search engine. Information on the Deep Web includes content on private intranets (internal networks such as those at corporations, government agencies, or universities), commercial databases like LexisNexis or Westlaw, and sites that produce content via search queries or forms. Going even further into the web, the Dark Web is the segment of the Deep Web that has been intentionally hidden. The Dark Web is a general term that describes hidden Internet sites that users cannot access without using special software. Users access the Dark Web with the expectation of being able to share information and/or files with little risk of detection.
- Surface Web. The magnitude of the web is growing. In the United States alone, about 100,000 new web domains are reportedly registered every day. Simultaneously, it is estimated that 40,000-70,000 web domains go offline each day. If these estimates are accurate, there are at least 30,000 web domains added daily.
- Deep Web. The Deep Web cannot be accessed by traditional search engines because the content in this layer of the web is not indexed. Information here is not "static and linked to other pages" as is information on the Surface Web. As researchers have noted, “[i]t’s almost impossible to measure the size of the Deep Web. While some early estimates put the size of the Deep Web at 4,000–5,000 times larger than the surface web, the changing dynamic of how information is accessed and presented means that the Deep Web is growing exponentially and at a rate that defies quantification.”12
- Dark Web. Within the Deep Web, the Dark Web is also growing as new tools make it easier to navigate.13 Because individuals may access the Dark Web assuming little risk of detection, they may use this arena for a variety of legal and illegal activities. It is unclear, however, how much of the Deep Web is taken up by Dark Web content and how much of the Dark Web is used for legal or illegal activities.
Regulating network traffic[edit | edit source]
Traditionally, data traffic has traversed the Internet on a “first-in first-out” (“FIFO”) and “best-efforts” basis. This protocol for data transmission was established principally as a result of DARPA’s original priority, which was to develop an effective technique for communications among existing interconnected networks, and which placed network survivability — or the potential for robust network operation in the face of disruption or infrastructure destruction — as the top goal in designing the overall architecture of this network of networks.
Since the Internet’s earliest days, however, computer scientists have recognized that network resources are scarce and that traffic congestion can lead to reduced performance. Although different data transmission protocols and the viability of usage-based pricing mechanisms were explored throughout the 1980s and 1990s, the debate over broadband connectivity and net neutrality is becoming increasingly strident.
Compared to traditional media[edit | edit source]
It is useful to describe certain key features of the Internet medium and to compare it to other, more traditional media.
- The Internet supports many-to-many connectivity. A single user can receive information and content from a large number of different sources, and can also transmit his or her content to a large number of recipients (one-to-many). Or a single user can engage with others in a one-to-one mode. Or multiple users can engage with many others (many-to-many). Broadcast media such as television and radio as well as print are one-to-many media — one broadcast station or publisher sends to many recipients. Telephony is inherently one-to-one, although party lines and conference calling change this characterization of telephones to some extent.
- The Internet supports a high degree of interactivity. Thus, when the user is searching for content (and the search strategy is a good one), the content that he or she receives can be more explicitly customized to his or her own needs. In this regard, the Internet is similar to a library in which the user can make an information request that results in the production of books and other media relevant to that request. By contrast, user choices with respect to television and film are largely limited to the binary choice of "accept or do not accept a channel," and all a user has to do to receive content is to turn on the television. The telephone is an inherently interactive medium, but one without the many-to-many connectivity of the Internet.
- The Internet is highly decentralized. Indeed, the basic design philosophy underlying the Internet has been to push management decisions to as decentralized a level as possible. Thus, if one imagines the Internet as a number of communicating users with infrastructure in the middle facilitating that communication, management authority rests mostly (but not exclusively) with the users rather than the infrastructure — which is simply a bunch of pipes that carry whatever traffic the users wish to send and receive. (How long this decentralization will last is an open question.) By contrast, television and the telephone operate under a highly centralized authority and facilities. Furthermore, the international nature of the Internet makes it difficult for one governing board to gain the consensus necessary to impose policy, although a variety of transnational organizations are seeking to address issues of Internet governance globally.
- The Internet is intrinsically a highly anonymous medium. That is, nothing about the way in which messages and information are passed through the Internet requires identification of the party doing the sending. One important consequence of the Internet's anonymity is that it is quite difficult to differentiate between adult and minor users of the Internet. A second consequence is that technological approaches that seek to differentiate between adults and minors generally entail some loss of privacy for adults who are legitimate customers of certain sexually explicit materials to which minors do not have legitimate access.
- The capital costs of becoming an Internet publisher are relatively low, and thus anyone can establish a global Web presence at the cost of a few hundred dollars (as long as it conforms to the terms of service of the Web host). Further, for the cost of a subscription to an Internet service provider (ISP), one can interact with others through instant messages and e-mail without having to establish a Web presence at all. The costs of reaching a large, geographically dispersed audience may be about the same as those required to reach a small, geographically limited audience, and in any event do not rise proportionately with the size of the audience.
- Because nearly anyone can put information onto the Internet, the appropriateness, utility, and even veracity of information on the Internet are generally uncertified and hence unverified. With important exceptions (generally associated with institutions that have reputations to maintain), the Internet is a "buyer beware" information marketplace, and the unwary user can be misinformed, tricked, and seduced or led astray when he or she encounters information publishers that are not reputable.
- The Internet is a highly convenient medium, and is becoming more so. Given the vast information resources that it offers coupled with search capabilities for finding many things quickly, it is no wonder that for many people the Internet is the information resource of first resort.
Vulnerabilities[edit | edit source]
The architecture of the ARPANET, on which the Internet is based, assumes that the entities connected to it are in fixed locations and can be trusted; consequently, the design is open and vulnerable. Today, entities connected to the Internet face constant attacks that may be launched from anywhere in the world. Federal, State, and local governments, industry, and consumers already spend billions of dollars each year on preventing and recovering from attacks. The number of attacks and their potential to cause damage are both expected to rise in the years ahead. The connection of millions of mobile devices adds to the traffic load, further stressing the Internet and increasing both its fragility and the difficulty of managing its complexity.
Because vital interests of the United States now depend on secure, reliable, high-speed Internet connectivity, Internet vulnerabilities and limitations are a growing national security problem. They also complicate the development of next-generation networking applications that are important to Federal missions and society at large.
|“||The infrastructure of the Internet is another possible terrorist target, and given the Internet's public prominence, it may appeal to terrorists as an attractive target. The Internet could be seriously degraded for a relatively short period of time by a denial-of-service attack, but such impact is unlikely to be long lasting. The Internet itself is a densely connected network of networks that automatically routes around links that become unavailable, which means that a large number of important nodes would have to be destroyed simultaneously to bring it down for an extended period of time. Destruction of some key Internet nodes could result in reduced network capacity and slow traffic across the Internet, but the ease with which Internet communications can be rerouted would minimize the long-term damage.||”|
Employment statistics[edit | edit source]
The Internet is creating new kinds of jobs. Between 1998 and 2008, the number of domestic IT jobs grew by 26 percent, four times faster than U.S. employment as a whole. According to one estimate, as of 2009, advertising-supported Internet services directly or indirectly employed three million Americans, 1.2 million of whom hold jobs that did not exist two decades ago. By 2018, IT employment is expected to grow by another 22 percent.
References[edit | edit source]
- Barry M. Leiner et al., "A Brief History of the Internet," at 1 (full-text).
- Federal Communications Commission, Protecting and Promoting the Open Internet NPRM (May 15, 2014) (full-text).
- Message of Pope Francis for the 48th World Communications Day (June 1, 2014) (full-text).
- 47 U.S.C. §230.
- 16 C.F.R. § 312.2.
- FNC Resolution: Definition of "Internet" (Oct. 24, 1995) (full-text).
- Centripetal Networks, Inc. v. Cisco Sys., Inc., 2020 WL 5887916, at *6 (E.D. Va. Oct. 5, 2020).
- Wash. Rev. Code 19.190.010(9).
- Internet Live Stats, Internet Users (full-text).
- Internet World Stats.
- The Size of the World Wide Web (the Internet) (full-text).
- See generally David D. Clark, "The Design Philosophy of the DARPA Internet Protocols," Computer Comm. Rev., Aug. 1988, at 106 (full-text); Barry M. Leiner, et al., A Brief History of the Internet (full-text).
- "A widely held view is that the Internet was funded by the Department of Defense to create a network that would survive a nuclear attack. This view is false, an urban myth, which persists to this day. The true motivation for creating the Internet back then was . . . to allow us to share resources across the net so that we could conduct research in computer science." Leonard Kleinrock, "The Internet Rules of Engagement: Then and Now," at note 3 (full-text).
- The four-node network was completed December 5, 1969, and connected the University of California, Los Angeles (UCLA), the Stanford Research Institute, the University of California–Santa Barbara, and the University of Utah. UCLA sent the first transmission to the Stanford Research Institute on October 29, 1969 at 22:30 PST. Mitch Waldrop, "DARPA and the Internet Revolution," in DARPA: 50 Years of Bridging the Gap 83 (2008) (full-text).
- The first network e-mail using the “username@hostname” format was sent in 1971. Id.
- "[N]o one master computer [was] responsible for sorting the packets and routing them to their destination." Id.
- "However, this redundancy has its limits; only a finite number of paths connect any given point to the rest of the system. Also, geography and economics mean that some locations have a high concentration of Internet facilities while others only have few." The Internet Under Crisis Conditions: Learning from September 11, at 12.
- Barry M. Leiner, supra ("Thus, by 1985, Internet was already well established as a technology supporting a broad community of researchers and developers, and was beginning to be used by other communities for daily computer communications. Electronic mail was being used broadly across several communities.").
- Michael Kende, The Digital Handshake: Connecting Internet Backbones 5 (FCC Office of Plans and Policy, Working Paper No. 32, 2000).
- See generally World Wide Web Consortium, About the World Wide Web Consortium (W3C).
- Marsha Walton, Web Reaches New Milestone: 100 Million Sites, CNN, Nov. 1, 2006.
- Network Solutions later merged with VeriSign. The new company currently uses the VeriSign name. Under its original agreement with the NSF, Network Solutions was also responsible for registering second-level domain names in the restricted .gov and .edu top-level domains.
- Internet Global Growth: Lessons for the Future, at 11.
- Report on Securing and Growing the Digital Economy, at 3.
- Internet World Stats (full-text).
- Christopher Vollmer, "Digital Darwinism" 4 (July 9, 2009) (full-text).
- Ryan Calo, Robotics and the Lessons of Cyberlaw, 103 Cal. L. Rev. (2015) (full-text).
- U.S. Department of Commerce, Internet Policy Task Force, Commercial Data Privacy and Innovation in the Internet Economy: A Dynamic Policy Framework (Dec. 16, 2010) (full-text).
- The Internet is also used for email, file transfers, and instant messaging, among other things.
- Customization happens explicitly when a user undertakes a search for particular kinds of information, but it can happen in a less overt manner because customized content can be delivered to a user based, for example, on his or her previous requests for information.
- Information Technology for Counterterrorism: Immediate Actions and Future Possibilities, at 16-17.
- Interactive Advertising Bureau, Economic Value of the Advertising-Supported Internet Ecosystem (June 10, 2009) (full-text).
See also[edit | edit source]
- Internet backbone provider
- Internet connection
- Internet connectivity
- Internet Engineering Steering Group
- Internet Engineering Task Force
- Internet exchange point
- Internet filtering
- Internet gateway
- Internet governance
- Internet information location tool
- Internet location
- Internet marketing
- Internet meme
- Internet of Things
- Internet operator
- Internet Over Cable: Defining the Future in Terms of the Past
- Internet peering point
- Internet Policy Task Force
- Internet privacy
- Internet Protocol
- Internet Protocol cloud
- Internet protocol number
- Internet protocol suite
- Internet radio
- Internet Relay Chat
- Internet Research Task Force
- Internet retailer
- Internet security
- Internet security protocol
- Internet service
- Internet service provider
- Internet surveillance
- Internet taxes
- Internet Tax Freedom Act of 1998
- Internet tax moratorium
- Internet Tax Non-Discrimination Act of 2003
- Internet telephony
- Internet television
- Internet traffic management
- Internet user
- Internet video
- Internet voting
- Internet voting machine
- Internet voting system
- Internet-based TRS
- Internet-ready device
- Internetwork layer
External resources[edit | edit source]
- Barry M. Leiner et al., "Brief History of the Internet" (1997) (full-text).
- Vint Cerf, "A Brief History of the Internet & Related Networks" (full-text).
- Rajiv C. Shah & Jay P. Kesan, "The Privatization of the Internet's Backbone Network" (2007) (full-text).
- Mitch Waldrop, "DARPA and the Internet Revolution" (full-text).