Net neutrality

Definition
Under a net neutrality (network neutrality) regime, individuals can access any content or service available on the Internet, can use any application they choose, and can attach any device to the network, all without the network provider's approval.

Overview
There is an extensive debate over what statutory and regulatory framework is most likely to foster innovation and investment both in physical broadband networks and in the applications that ride over those networks. Perhaps the most contentious element in that debate is whether competitive marketplace forces are sufficient to constrain broadband network providers from restricting independent applications providers’ access to their networks in ways that would harm consumers and innovation, or whether government intervention is needed in the form of what have been referred to as network neutrality, unfair competitive practices, or other nondiscrimination rules placed on the network providers.

There is no single accepted definition of "network neutrality" (also called "net neutrality"). However, most agree that any such definition should include the general principles that owners of the networks that compose and provide access to the Internet should not control how consumers lawfully use those networks and should not be able to discriminate against content providers’ access to those networks. Typically, the term "net neutrality" is identified with positions that recommend at least some legal or regulatory restrictions on broadband Internet access services, including non-discrimination requirements above and beyond any that may be implied by existing antitrust laws or Federal Communications Commission (“FCC”) regulations.

Some policymakers contend that more specific regulatory guidelines may be necessary to protect the marketplace from potential abuses which could threaten the net neutrality concept. Others contend that existing laws and FCC policies are sufficient to deal with potential anti-competitive behavior and that such regulations would have negative effects on the expansion and future development of the Internet.

Changes in Telecommunications Market
This debate has been stimulated by some fundamental changes in the telecommunications market environment: several technology-driven, several market-driven, and one regulatory-driven.
 * Digital technology has reduced the costs for those firms that already have single-use (for example, voice or video) networks to upgrade their networks in order to offer multiple services over their single platform. The cost for these previously single-service providers to enter new service markets has been significantly reduced, inducing market convergence. Most notably, cable companies are upgrading their networks to offer voice and data services as well as video services, and telephone companies are upgrading their networks to offer video and data services as well as voice services.
 * Despite these lower entry costs, however, wireline broadband networks require large up-front sunk capital expenditures. This may limit the number of efficient broadband networks that can be deployed in any market to two (the cable provider and the wireline telephone company) unless a lower-cost alternative becomes available using wireless or some other new technology. Although wireless technology may provide a third or even fourth alternative, it is not likely to be a ubiquitous option anytime soon. The commercial mobile wireless (cellphone), WiFi, and WiMAX technologies still require significant further technical development before they will be able to provide comparable service and operate at the necessary scale. Moreover, spectrum is just being made available for these technologies, and in many cases parties currently using that spectrum must be moved to other spectrum.


 * The new broadband networks are able to deliver potentially highly valued services, such as voice over internet protocol (VoIP) and video over internet protocol (IP video), that are qualitatively different from most of the services that have been provided over the internet in the past. Whereas services such as e-mail and website searches are not sensitive to “latency” (the amount of time it takes a packet of data to travel from source to destination), these new services are sensitive to delays in the delivery of packets of data due to congestion or other problems. Latency is affected by physical distance, by the number of “hops” from one internet network to another that must be made to deliver the packets (since there can be congestion at each hand-off point), and by voice-to-data conversion.
In effect, the internet (or a proprietary IP network) is a set of routers connected by links. A packet of data is forwarded from router to router until it arrives at its destination. Typically, each router has several incoming links on which packets arrive and several outgoing links on which it can send packets. When a packet shows up on an incoming link, the router determines on which outgoing link the packet should be forwarded. If that outgoing link is free, the packet can be sent out immediately. If the outgoing link is busy transmitting another packet, the newly arrived packet must wait: it is “buffered” in the router’s memory until the outgoing link is free. Buffering lets the router deal with temporary surges in traffic. The router is programmed to determine which packets should be delayed and also, when the link becomes available, which buffered packet should be transmitted next; that is, a packet prioritization scheme is devised. This could be a simple first-in, first-out scheme, a scheme that favors applications sensitive to packet delay, a pay-for-priority scheme, or something else.
If packets keep arriving faster than they can be sent out on some outgoing link, the number of buffered packets will grow until the router runs out of buffer memory. At that point, if one more packet shows up, the router has no choice but to discard a packet: either the newly arriving packet or a buffered one, to make room for the new arrival. The router is programmed to determine which packets should be dropped, creating a second packet prioritization scheme, which again could be first-in, first-out, a scheme that favors applications sensitive to dropped packets, pay-for-priority, or something else. (A minimal sketch of such a buffering and prioritization scheme appears after this list.) Dropped packets can be retransmitted, but for applications, such as voice, that require packets to arrive and be reassembled within a short period of time, such packet recovery might not occur quickly enough to retain service quality.
Congestion thus creates at least two problems. The first is dropped packets, to which some applications are more sensitive than others. The second is “jitter” caused by the delay of certain packets. Internet traffic is usually “bursty,” with periods of relatively low activity punctuated by occasional bursts of packets. (For example, browsing the web generates little or no traffic while a page is being read, but a burst of traffic when the browser fetches a new page.) Even if the router is programmed to minimize delay by delaying only low-priority packets when congestion absolutely requires it, bursty high-priority traffic means that low-priority traffic will usually move through the network with little delay but will experience noticeable delay whenever a burst of high-priority traffic arrives. This on-again, off-again delay is called jitter. Jitter has no effect when downloading a big file, for which the concern is the average packet arrival rate rather than the arrival time of any particular packet. But the quality of applications like voice conferencing or VoIP, which rely on the steady streaming of interactive, real-time communication, can suffer greatly from jitter. As a result, the traditional internet “best effort” standard, which does not guarantee against delays, may be insufficient to meet customers’ service quality requirements for these new latency-sensitive services. More intensive network management may be needed to meet these quality of service (packet delivery) requirements.
 * Equipment is being deployed in broadband networks that can identify both the source of individual packets and the application with which they are associated. With this equipment, network providers can give some packets higher priority than others, which can ensure that specific quality of service requirements are met, but which also could be abused to discriminate for or against particular applications or applications providers.
 * Some new applications place substantial bandwidth demands on the public internet and proprietary IP networks. For example, one industry analyst estimated that a single application, the BitTorrent file-sharing software used to download movies and other content, accounted for as much as 30% of all internet traffic at the end of 2004, and that peer-to-peer (P2P) applications in general represented 60% of internet traffic. BitTorrent has been used both for legitimate purposes and for the illegal downloading of copyrighted materials, but it has now been accepted by some mainstream content providers. For example, Warner Brothers announced plans to make hundreds of movies and television shows available for purchase over the internet using BitTorrent software. Other major industry players, such as Microsoft and Sony, have introduced movie download services that use P2P technology.


 * Although the telephone and cable companies are deploying different network architectures, they are pursuing business plans and regulatory strategies with the same key elements:

   * They expect latency-sensitive video and voice services to be the “killer applications” that will generate the revenues needed to justify the upgrade and buildout of their physical broadband networks.

   * To minimize customer churn and to gain an advantage over providers of single services, they market bundles of voice, data, and video services, with discounts that increase with the number of services purchased. (Many expect this “triple-play” bundle to be expanded to a “quadruple-play” bundle with the addition of mobile wireless service.)

   * The set of services the telephone and cable companies plan to offer over their networks, despite having interactive components, follows the model of the customer as primarily a recipient, not a transmitter, of information. The broadband network architecture they all are deploying is therefore asymmetric, with significantly greater bandwidth available from the broadband provider to the customer than in the reverse direction.

   * The video and voice services they offer, as well as other end-to-end services they plan to offer in the future, require quality of service assurances that they claim are not available on the “public internet” but can be provided on their proprietary IP networks. To assure the quality of service of their own offerings, the broadband network providers all seek to manage bandwidth usage on their proprietary broadband networks by reserving a significant proportion of network capacity for their own applications and by controlling the access that independent applications providers have to those networks through a variety of means, including charges for priority access.


 * The Federal Communications Commission (“FCC” or “Commission”) ruled in 2002 that cable modem service offered by cable companies, despite having a telecommunications component, is an information service and therefore not subject to the common carrier regulations imposed on telecommunications services in Title II of the Communications Act. The FCC decision was upheld by the U.S. Supreme Court in June 2005. Subsequently, the FCC ruled that DSL service offered by telephone companies also is an information service. As a result, neither cable modem service nor DSL service is subject to the interconnection, nondiscrimination, and access requirements of Title II.
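
The two packet prioritization decisions described in the list above (which buffered packet to transmit next when the link is free, and which packet to drop when the buffer is full) can be made concrete with a short sketch. The following Python model of a single outgoing link's buffer is purely illustrative: the Packet class, the PRIORITY table, and the rule favoring latency-sensitive applications are hypothetical choices made for the sake of the example, not a description of how any actual router or network provider operates; a first-in, first-out or pay-for-priority rule could be substituted for either policy.

    import heapq
    from dataclasses import dataclass
    from itertools import count

    # Illustrative model of the buffer for one outgoing link. Two separate
    # policy decisions appear, mirroring the text: (1) scheduling, i.e. which
    # buffered packet is transmitted next, and (2) discard, i.e. which packet
    # is dropped once the buffer is full. Here both favor latency-sensitive
    # applications; FIFO or pay-for-priority rules could be used instead.

    @dataclass
    class Packet:
        source: str
        application: str  # e.g. "voip", "web", "email" (hypothetical labels)

    # Lower number = higher priority; unknown applications get lowest priority.
    PRIORITY = {"voip": 0, "video": 1, "web": 2, "email": 3}

    class OutputLinkBuffer:
        def __init__(self, capacity: int):
            self.capacity = capacity
            self._heap = []           # entries: (priority, arrival_order, packet)
            self._arrivals = count()  # unique counter; ties break first-in, first-out

        def _key(self, pkt: Packet) -> int:
            return PRIORITY.get(pkt.application, max(PRIORITY.values()) + 1)

        def enqueue(self, pkt: Packet) -> None:
            # Buffer the arriving packet; if the buffer is now over capacity,
            # something must be discarded (the discard prioritization scheme).
            heapq.heappush(self._heap, (self._key(pkt), next(self._arrivals), pkt))
            if len(self._heap) > self.capacity:
                victim = max(self._heap)  # worst priority, most recent arrival
                self._heap.remove(victim)
                heapq.heapify(self._heap)

        def transmit(self):
            # When the outgoing link becomes free, send the highest-priority
            # buffered packet (the transmission prioritization scheme).
            if self._heap:
                return heapq.heappop(self._heap)[2]
            return None

    if __name__ == "__main__":
        link = OutputLinkBuffer(capacity=3)
        for app in ["email", "voip", "web", "voip"]:  # a small traffic burst
            link.enqueue(Packet(source="a", application=app))
        while (pkt := link.transmit()) is not None:
            print(pkt.application)  # prints: voip, voip, web (email was dropped)

In this sketch the discard rule drops the lowest-priority buffered packet, so the e-mail packet is sacrificed to the VoIP burst; under a sustained burst of high-priority traffic, low-priority packets would instead sit in the buffer, producing exactly the on-again, off-again delay (jitter) described above.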

Position of Proponents
Proponents of network neutrality regulation include, among others, some content and applications providers, non-facilities-based ISPs, and various commentators. They generally argue that “non-neutral” practices will cause significant and wide-ranging harms and that the existing jurisdiction of the FCC, FTC, and DOJ, coupled with Congressional oversight, is insufficient to prevent or remedy those harms. Proponents suggest that, with the deregulation of broadband services, providers of certain broadband Internet services have the legal ability, as well as economic incentives, to act as gatekeepers of content and applications on their networks.

Principally, these advocates express concern about the following issues:


 * (1) blockage, degradation, and prioritization of content and applications;


 * (2) vertical integration by ISPs and other network operators into content and applications, and the network operators favoring their own content, thereby placing unaffiliated content providers at a competitive disadvantage;


 * (3) effects on innovation at the “edges” of the network (that is, by content and applications providers);


 * (4) lack of competition in “last-mile” broadband Internet access markets;


 * (5) remaining legal and regulatory uncertainty in the area of Internet access; and


 * (6) the diminution of political and other expression on the Internet.

Some applications providers therefore have proposed enactment of statutory and regulatory requirements, such as nondiscriminatory access to broadband networks or network neutrality requirements. Others have been less confident about the ability to craft effective nondiscrimination or neutrality rules. They have suggested that government policy that promotes entry by broadband network providers that do not share the business plans of the cable and telephone companies might be a more effective way to foster innovation and investment in applications. This might include prohibiting restrictions on municipal deployment of broadband networks, expediting the availability of spectrum for wireless broadband networks, and limiting the amount of such spectrum that can be acquired by companies owned by or in other ways affiliated with the wireline broadband providers.

Not all proponents of net neutrality regulation oppose all forms of prioritization, however. For example, some believe that prioritization should be permitted if access to the priority service is open to all content and applications providers on equal terms; that is, without regard to the identity of the content or application provider.

Position of Opponents
Opponents of network neutrality regulation include, among others, some facilities-based wireline and wireless network operators and other commentators. They maintain that net neutrality regulation will impede investment in the facilities necessary to upgrade Internet access and may hamper technical innovation. They also argue that the sorts of blocking conduct described by net neutrality proponents have been mainly hypothetical thus far, are unlikely to become widespread, and thus are insufficient to justify a new, ex ante regulatory regime.

Principally, opponents of net neutrality regulation argue that:


 * (1) neutrality regulations would set in stone the status quo, precluding further technical and business-model innovation;


 * (2) effective network management practices require some data prioritization and may require certain content, applications, or attached devices to be blocked altogether;


 * (3) new content and applications are likely to require prioritization and other forms of network intelligence;


 * (4) allowing network operators to innovate freely and differentiate their networks permits competition that is likely to promote enhanced service offerings;


 * (5) prohibiting price differentiation would reduce incentives for network investment generally and may prevent pricing and service models more advantageous to marginal consumers;


 * (6) vertical integration by network operators into content and applications and certain bundling practices may benefit consumers; and


 * (7) there is insufficient evidence of either the likelihood or severity of potential harms to justify an entirely new regulatory regime, especially given that competition is robust and intensifying and the market generally is characterized by rapid technological change.

Opponents also note that the FCC has not requested further authority and has successfully used its existing authority, citing a March 3, 2005, action in which the FCC intervened and resolved, through a consent decree, an alleged case of port blocking by Madison River Communications, a local exchange (telephone) company. The full force of the antitrust laws, they claim, is also available in cases of discriminatory behavior.

Other Arguments
To further complicate the debate, there is a growing economic literature on “two-sided” markets, in which a network provider has two distinct sets of customers to whom it provides service and sets the terms, conditions, and rates for network access: end users, who seek access to the network to receive services, and applications services providers, who seek access to the network in order to reach those end users. According to that literature, while additional access networks will increase the competitive options available to end users, they may not improve the position of independent applications providers, who do not have the option of choosing among access networks for the best deal but rather must connect to all of the access networks in order to reach their customers. (For example, an applications provider cannot reach a cable company's broadband subscribers by purchasing access from a competing telephone company; it must obtain access from that cable company itself.)