
Not only has electronic communication revolutionised our way of interacting with each other, with fundamental changes in private life and at work, it has also become a substantial economic factor which bears a high potential for added value. Traditional communication markets are complemented by activities that originate from the Internet as a new communication platform. This has led to some tricky regulatory problems. The present issue of Intereconomics addresses two of them: a special section on "net neutrality" summarises a debate on how to regulate (or not to regulate) capacity problems in the Internet; and two articles on the regulation of telecommunication markets discuss the issues of identifying competition problems on the one hand and of the appropriate regulatory approach on the other.

The term "net neutrality" refers to a heated discussion in the USA. Everyone who uses the Internet enjoys the uncomplicated way of accessing information, downloading content and communicating with the rest of the world. This has led to data traffic of unimagined dimensions, resulting in the usual problems of congestion. As a consequence, network operators invest in additional network capacity. This investment should – ideally – be financed by those who use the pipes. However, in a "neutral" setting, every data transmission is considered equal and capacity problems are solved by queuing (or by algorithms that are undisclosed). This may cause slight delays in the transmission of information, which is not critical for most services (e.g. the most frequently used e-mail system), but can cause disruptions in more sensitive applications, such as voice over IP (VoIP). Service providers thus usually promise to effect the transfer "at best effort", thus hinting at a possible setting of priorities without detailing the procedure. Some rather new applications, such as Internet TV, online video gaming or music downloads, require extremely high network capacities, and hence network providers argue that those who benefit from these applications (consumers and service providers) should pay for the necessary quality upgrade of the network. Or, to put it differently, a price differentiation mechanism should be introduced which guarantees higher capacity for a higher price. Those users who are not willing to pay the higher price have to live with lower service quality. The fundamental differences in approaching the problem are manifest in the fact that what is labelled "net neutrality", suggesting a "neutral" (implying "democratic" and "just") treatment, on one side of the fence, is called "quality of service" or price differentiation on the other.
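The contrast between "neutral" queuing and quality-of-service treatment can be sketched in a few lines. The following toy example (the packet names and classification are invented for illustration, not taken from any of the articles) shows how the same stream of transmissions is ordered under plain arrival-order queuing versus a scheme that moves delay-sensitive traffic such as VoIP ahead of delay-tolerant traffic such as e-mail:

```python
# Hypothetical packet stream in arrival order:
# (name, delay_sensitive?) -- the labels are illustrative assumptions.
packets = [
    ("email-1", False),
    ("voip-1", True),
    ("video-1", False),
    ("voip-2", True),
    ("email-2", False),
]

def fifo_order(pkts):
    """'Neutral' treatment: every transmission is equal, so congestion
    is resolved by simple queuing in order of arrival."""
    return [name for name, _ in pkts]

def prioritised_order(pkts):
    """'Quality of service': delay-sensitive traffic (e.g. VoIP) jumps
    the queue; delay-tolerant traffic (e.g. e-mail) waits."""
    sensitive = [name for name, s in pkts if s]
    tolerant = [name for name, s in pkts if not s]
    return sensitive + tolerant

print(fifo_order(packets))
# ['email-1', 'voip-1', 'video-1', 'voip-2', 'email-2']
print(prioritised_order(packets))
# ['voip-1', 'voip-2', 'email-1', 'video-1', 'email-2']
```

Under congestion, the second ordering is exactly what a paying priority class buys: VoIP packets are served first, and the "slight delays" fall entirely on traffic that can tolerate them.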
The success story of the Internet and its resulting economic and social importance heat up the debate: fears that the unique universal access features of the Internet might be sacrificed to the profit interests and power play of large network service providers on the one hand, and fears that there will not be sufficient economic incentives to upgrade the network as required by ever more bit-intensive applications on the other, are equally strong on both sides. This Special Issue presents the core economic and regulatory arguments of the debate.

The article by Rob Frieden gives an introduction to the roots of the problem and identifies "user centred" approaches versus "investment incentives" and "regulatory security" as the critical issue. Lack of transparency in ISPs' dealing with the congestion problem might put consumers in the hands of carriers and operators and their power play; however, an end to the Internet boom due to the economically inefficient handling of scarce resources would seem to be equally detrimental to the interests of consumers. Frieden concludes that regulators should abstain from impeding justified price and quality discrimination and at the same time keep an eye on ISPs that create "false congestion" as an excuse to extract extra profits from unaware users.

Viktória Kocsis and Paul de Bijl identify prioritising and blocking as potentially anti-competitive behaviour in future Internet scenarios. They analyse the welfare implications and possible impacts on innovation and investment and present a theoretical model which captures the intricacies of the net neutrality debate. Various types of discrimination and their effects on static and dynamic efficiency are analysed. They find that different regulatory measures are appropriate in the case of port-blocking or deliberate quality degradation than in the case of access tiering. However, Kocsis and de Bijl also warn against the intensive regulation of network access, as it may discourage facilities-based competition and thus reduce incentives to invest in network capacity. Although facilities-based competition could be a remedy for many threats deriving from non-neutral networks, it may lead to an inefficient doubling of capacities.

Jörn Kruse chooses an approach which is guided by economic theory and assesses the impact of "neutrality regulation" as suggested in proposed legislation in the USA against a non-regulated network in which price differentiation in relation to quality is possible. In a competitive market, he sees little incentive for discrimination, as the discriminating actor would probably harm himself. Quality discrimination is seen as an advantage, because it would allow the prioritising of data flows of high economic value over those with little value. By categorising applications according to the data volumes involved and sensitivity to transmission quality he can show that sensible regulation can lead to optimal economic solutions with a minimal impact on service quality for those who choose to pay less.
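Kruse's two dimensions – data volume and sensitivity to transmission quality – can be made concrete with a small classification sketch. The application list and the mapping below are illustrative assumptions in the spirit of the argument, not Kruse's actual categories:

```python
# Hypothetical classification: each application is rated on data volume
# and on sensitivity to transmission quality (delay, jitter).
APPLICATIONS = {
    # name:            (volume,   sensitivity)
    "e-mail":          ("low",    "low"),
    "voip":            ("low",    "high"),
    "online gaming":   ("medium", "high"),
    "internet tv":     ("high",   "high"),
    "music download":  ("high",   "low"),
}

def quality_class(volume, sensitivity):
    """Map the two dimensions to a coarse quality/price tier (illustrative)."""
    if sensitivity == "high":
        return "priority transmission (higher price)"
    if volume == "high":
        return "bulk transfer (capacity-priced, delay-tolerant)"
    return "best effort (standard price)"

for app, (vol, sens) in APPLICATIONS.items():
    print(f"{app}: {quality_class(vol, sens)}")
```

The point of such a mapping is Kruse's: only the quality-sensitive flows need (and pay for) priority, while delay-tolerant traffic retains acceptable service at the standard price.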

In his contribution to the net neutrality debate Scott Marcus analyses differences between the USA and Europe. Whereas the debate has gained a lot of attention in the USA, where defenders of net neutrality urgently plead for regulation to save the Internet for public use while "quality of service" advocates envisage the end of the Internet due to a lack of incentives for quality upgrades, in Europe only insiders have actually become aware of the problem. Marcus explains this with the higher intensity of competition in Europe, which considerably reduces the danger of misuse of power in an unregulated network.

What do we learn from the debate? Much depends on the regulatory function of competition: in competitive markets the risks of abuse, exploitation and unfair practices are relatively low. In the long run, incentives to invest and to innovate suffer if economic rules are ignored, and much as we might dislike it, the absence of regulation does not mean that congestion is being resolved democratically. It seems impossible to differentiate between commercial and non-commercial applications. Thus, the differentiation can only be effected via quality criteria which result in higher prices for extremely high capacity transmissions and low prices for standard usage.

The paper by Johannes Bauer and Erik Bohlin reviews recent changes in US telecommunications regulation and explores their relevance for other countries. From the late 1980s until 2003, the regulatory frameworks of the USA and other nations were based on similar principles. For the past few years, US telecommunications policy has pursued a much bolder pro-competitive course than regulators elsewhere. The USA is thus responding to the particular policy challenges posed by next-generation networks and services. A static concept in which the market shares of dominant players are taken as the main indicator for workable competition at the service level is contrasted with a dynamic approach in which investment in competing infrastructures is supposed to establish sustainable competitive structures in the long run. This is an interesting view, as currently the EU insists on a static regulatory approach by prioritising market entry at the service level, thus diminishing the potential for facilities-based competition.

The contestability of markets has become widely accepted as the criterion for a sufficient level of competition: contestable markets do not warrant regulatory intervention. Wolfgang Briglauer and Kurt Reichinger challenge the appropriateness of this concept for telecommunication markets. Putting sunk costs at the focus of their discussion, they argue that the costs of market entry and exit vary considerably between incumbents and new entrants, and thus contestability alone cannot be used to find the right balance between ex ante regulation and competition policy in telecommunications.


DOI: 10.1007/s10272-008-0236-0