Monday, October 8, 2012

Net neutrality debate goes to the ITU WCIT

Published at: http://www.diplomacy.edu/blog/net-neutrality-debate-goes-itu-wcit

(September 2012)

‘The enemy of my enemy is my friend.’ This is one way to read ETNO’s (European Telecommunications Network Operators’ Association) most recent proposal to the ITU (International Telecommunication Union) to regulate the possibility of creating a multi-tiered Internet. Weren’t the telecom operators against more regulation, especially within a global framework such as the ITU? The stakes are higher now – mandating the ITU to allow exceptions to net neutrality may prevent national regulators from imposing the opposite!

In his interview with CNET, Luigi Gambardella, chairman of ETNO’s executive board, clearly explains the idea behind the proposed principle of ‘sender-party-pays’ for Internet traffic. In a nutshell, as Gambardella puts it: ‘...the operators are free to negotiate commercial agreements beyond best effort. These commercial agreements are based on the value of the information, not the bits.’

A multi-tiered Internet concept, net neutrality, and pricing issues

In order to make big content providers like Google, Facebook, or Hollywood pay a fee for using the telecom infrastructure to reach their customers (and earn fortunes doing so), the operators propose introducing a ‘business tier’ of the Internet, i.e. special services with a quality of service beyond best effort, which is today’s Internet standard. To maintain an open Internet, they argue, the business tier would run in parallel with the ‘economic tier’, i.e. the Internet as we know it, which would remain unchanged and based on the principle of best effort.

Proposals on a multi-tier Internet have been at the heart of discussions on net neutrality for years. ETNO has been pushing this idea together with other major operators around the world. The business tier has also been proposed, in the form of ‘additional online services’, by Verizon and Google in their Legislative Framework Proposal for an Open Internet in 2010.

Opposition from the Internet communities has always been loud and clear. The Electronic Frontier Foundation (EFF), in its analysis of the Google-Verizon proposal, pointed out that the proposed ‘additional online services ... could be the exception that swallows the nondiscrimination rule’.
Gambardella argues that the multi-tier Internet can bring more choice:

In the end, the customer will have more choice. It's like if you travel in economy. But why don't you also allow business class, a premium class, to differentiate the service? There is more choice. The customer decides what is better for him.

However, with both the business and economic tiers running through the same network and sharing the same total available bandwidth, it is likely that the economic tier will be the one to suffer in the event of congestion. Or, to continue with Gambardella’s airplane analogy: a plane has a limited number of economy and business seats, and no business seats will be given over to economy class even when demand for economy seats outstrips supply; ergo, economy class will suffer. Besides, the argument of ‘more choice’ is becoming vague in the digital age, as I argued in one of my previous blogs: can most users really make informed and meaningful choices and decide what’s best for them?
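The congestion argument can be made concrete with a toy model. The sketch below is a minimal illustration only: the strict-priority rule, the `share_link` function, and all the numbers are my own assumptions, not part of ETNO’s proposal.

```python
# Toy strict-priority model of a shared link: the 'business tier' is
# served first, and the best-effort 'economic tier' gets what remains.
# All names and numbers are illustrative, not taken from any proposal.

def share_link(capacity, business_demand, economy_demand):
    """Return (business_served, economy_served) under strict priority."""
    business_served = min(business_demand, capacity)
    economy_served = min(economy_demand, capacity - business_served)
    return business_served, economy_served

# Uncongested: both tiers are fully served on a 100-unit link.
print(share_link(100, 30, 50))   # (30, 50)

# Congested: as business demand grows, only the economic tier is squeezed.
print(share_link(100, 80, 50))   # (80, 20)
print(share_link(100, 100, 50))  # (100, 0)
```

In the congested cases the business tier is always fully served, while the economic tier absorbs the entire shortfall – which is exactly why economy class would suffer.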

Not least, the Internet community is asking operators to be more creative and innovative in constructing new business models instead of finding simple ways to take pieces of the content providers’ revenue pie. In his excellent post on Carriage vs Content, Geoff Huston explains the decades-long dispute between carriers (telecom operators) and content providers on who owes money to whom: from ‘You owe us money!’  to ‘No, it’s you who owe us money’ and back again.

ETNO’s proposal, beyond asking for a multi-tiered Internet, also suggests a fundamental change in the Internet’s economic model through the principle of ‘sending party network pays’, explaining how the gigantic revenues of the content industry could be distributed to the operators as well. This new ‘you owe us money!’ outcry has provoked strong opposition from the leading Internet businesses in the USA, including ETNO’s fellow operator Verizon, and also from the US government. A separate post is needed to discuss the economic aspects of the proposal, so I will not go into detail here.

To regulate or not to regulate

Back to the even more interesting bit: ETNO’s proposal is put forward as an amendment to the International Telecommunication Regulations (ITR), the treaty that determines how international telecommunications services operate across borders. This ITU ‘mandate’ document is to be updated at the World Conference on International Telecommunications (WCIT) in Dubai this December for the first time since it was developed back in 1988.

The controversies around the forthcoming WCIT and the modifications of the ITR have been hovering around the global diplomatic and policy fora for several years, gaining intensity as we approach the Dubai meeting. Positions vary from the most developed countries, which oppose the ITU extending its mandate over the Internet, to most of the global south, which is searching for more influence on the regulation of the Internet under the ITU umbrella. The Internet businesses – especially the dominant US and European content providers and telecom operators – have almost unanimously been against greater regulation at both local and global level; for them, giving more say to the ITU would mean the globalisation of the governance of their business models and markets.
Now, it is the European association of telecom operators proposing that the ITU extends its mandate to allow the operators to negotiate commercial agreements beyond best effort – i.e. to introduce a business tier to the Internet. In effect, as user communities would argue, ETNO proposes that the new global telecom regulations allow exceptions to the net neutrality principle.

This move, in fact, is quite logical and even smart: if the global treaty allowed a multi-tier Internet, national regulators would have much less room for manoeuvre to protect the network neutrality principle. Due to pressure from Internet communities on their governments and line regulators to protect the open Internet, a number of states have already incorporated the principle of non-discrimination into their policy acts – including the USA, Norway, the Netherlands, and others (as I have reported earlier); for ETNO, this may be a means to put an end to this trend.

Tweeting about the initiative, Gambardella (@lgambardella) confirms that through this proposal ETNO wants to avoid any further Internet regulation. In his interview with CNET, he justifies ETNO’s approach to the ITU process by acknowledging the ITR as a high-level principle, a kind of global ‘constitution’, and explains the motives:

Because what could happen is that in one year’s time, or two year’s time, some member states would perhaps ask to introduce some new limitation on the Internet. So, basically, the paradox is that our proposal is to impede some member state to regulate further the Internet.

Needless to say, this proposal has raised lots of opposition not only from the USA but also from the EU, which has clearly announced that its position for WCIT will be to oppose any attempt to extend ITU regulations to the routing of Internet traffic and Internet content. More importantly, ETNO’s fellow operators from the USA, such as Verizon and AT&T, have also raised strong concerns over this proposal – both because they are in principle against extending the ITU’s mandate over the Internet, and because they are against the new economic models suggested by ETNO, which would likely hurt the dominant US businesses.

‘The enemy of my enemy is my friend.’ ETNO has decided to fight against national/regional regulation (incarnated in European governments and regulators) through the enemy’s enemy – the global regulation (incarnated in the ITU); this is a U-turn in the complex multistakeholder relations within the global ICT policy process. With its myriad of influential actors, contemporary diplomacy is becoming increasingly complex and unpredictable, isn’t it?

Hey, Govs - leave those ISPs alone!

Published at:

(March 2012)


Part I : What is wrong with governments forcing liability on Internet intermediaries?

Does the good old ‘don’t shoot the messenger’ still apply in a digital world? Even if, in today’s economy, the messenger earns quite a lot of money from his work, does it mean he should now be liable for the content of the messages he delivers? Or should states instead think of promoting legal content by changing some old regulatory models and outdated ways of doing business?

Meet the messengers

Let’s start with identifying some of the messengers in the digital world, often referred to as Internet intermediaries: the Internet service providers (ISPs) and the content providers.

Internet service providers (ISPs)

On their way through the vast network of networks, digital packets are routed by the ISPs, which guide them from source to destination through the fastest and most efficient route. ISPs do not check the packets for inappropriate content; they simply deliver them to us. ISPs range from local small and medium enterprises (SMEs) to giant international telecoms such as Telefonica, AT&T, and Verizon. Let’s face it: they (can) earn a lot! Many of them are among the most solvent business entities in today’s global economy, even in recessionary times.

Content providers

Do you even remember the last time you typed a full Internet address (URL) into your browser? No? I didn’t think so. Instead, we Google the address we need, follow the link through Facebook or Twitter, or visit social bookmarking sites. We seldom share files via e-mail any more. Instead we use Dropbox, Google Docs, MegaUpload (RIP and resurrected). These content providers are also digital messengers: they find the information for us, help us share it widely and access it easily. They do not check the information we access for inappropriate content; they simply deliver the existing information to us. Let’s face it again: with billions of searches or posts per day, they certainly earn lots and lots of money through advertising. They represent the emerging economic giants of today.

Should we shoot the messengers?

To identify the best approaches for fighting illegal content, we should look through both types of lenses: shortsighted and farsighted.

The simplistic (shortsighted) view: Yes

Internet intermediaries are at the centre of digital content distribution. Technically (though only theoretically) they can check each and every piece of digital information passing through their servers and filter out the inappropriate parts of the web…if they so wish, or if they are obliged to do so. This work, though costly (technology, knowledge, and manpower), would be done by the intermediaries themselves, and would save governments from having to invest in educating and equipping juridical institutions to join the digital reality.

Since the intermediaries will not be delighted at investing in inspecting and filtering our data (which would also reduce the traffic flow and thereby their profits) or disclosing their customers’ private information to third parties (at least not without financial return), the regulators might need to force them to do so. And this is where SOPA/PIPA/ACTA and other abbreviated legislation come in, making the intermediaries liable for what they transport, and asking them to self-censor and reveal their users. It seems a short, sweet and practical solution: digital content will be controlled and the intellectual property rights (IPR) industry will be protected.

The holistic (farsighted) view: No

Ever heard of the term Web 2.0? Content on the Net is not exclusively produced or shared by web admins and the quality content industry (like Hollywood) anymore; instead, it is mostly created and shared by users themselves – some 2 billion of them at the moment. And there is so much more than illegal content: Wikipedia’s collaborative information, YouTube’s creative artistic and educational videos, near-real-time Twitter news and updates, Google-stored scientific papers and books, endless small websites and services with local content from cultures across the globe...

Yes, there is also inappropriate and even illegal content. But are the intermediaries the right ones to judge what is appropriate or illegal and what is not? Should Telefonica or Google decide if certain bits of content are parts of counterfeit products and whether this content is being used for private or commercial gain? Should they decide if some websites, like MegaUpload for instance, are involved in large-scale illegal activity? Are they competent to do so? Are they able to?

Some statistics will help: 60 hours of video are uploaded every minute on YouTube (link), Twitter users send an average of 140 million tweets per day (link), and over 550 million websites exist, with more than 300 million added in 2011 alone (link). Even if intermediaries were obliged, what would it take for them – in terms of equipment and manhours (lawyers and engineers) – to regularly check through all this content? Mission impossible – even for ius cogens types of content (child porn, justification of genocide or terrorism) or politically or culturally sensitive content (porn, gambling or Nazi materials), let alone for IPR with its specific sensitive legal aspects.
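To get a feel for the scale, here is the back-of-envelope arithmetic behind the YouTube figure above. It is a rough illustration only: the eight-hour shift and the one-to-one (real-time) review speed are my own assumptions.

```python
# Back-of-envelope: reviewer headcount implied by the figure of 60 hours
# of video uploaded to YouTube per minute (as cited in 2012).
# The 8-hour shift and real-time (1x) review speed are assumptions.

UPLOAD_HOURS_PER_MINUTE = 60
REVIEW_HOURS_PER_SHIFT = 8          # one full-time reviewer's working day

# Hours of new video arriving every day.
hours_uploaded_per_day = UPLOAD_HOURS_PER_MINUTE * 60 * 24   # 86,400

# Full-time reviewers needed just to watch one day's uploads once.
reviewers_needed = hours_uploaded_per_day / REVIEW_HOURS_PER_SHIFT
print(reviewers_needed)  # 10800.0
```

And that is for a single platform, counting only the time needed to watch the material once – before any legal analysis even begins.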

Yet, if held liable and forced to act by governments, in order to avoid severe financial penalties intermediaries may turn to the simplest (or only possible) solution: severe self-censorship based on even the slightest hint of what could be inappropriate content, according to their own blurry internal criteria. In practice this could mean that:
  • Twitter and Facebook – since they would not be able to follow all the posts and shared links – might start censoring posts based on keywords or web addresses blacklisted according to unknown internal criteria.
  • Wikipedia – since it would not be able to check through all its articles and links – might need to remove all the articles entered collaboratively by users where there is even a possibility that a quoted source does not respect authors’ rights.
  • PayPal would cease providing services to Internet companies in which they are not 100% confident (remember the case of Wikileaks?).
  • Google might filter out potentially questionable search results, again based on internal blacklists.
  • ISPs might introduce filtering of entire web spaces (DNS filtering) such as YouTube or Twitter, since they cannot guarantee the content shared there.
This excessive self-censorship would result in loads of valuable Internet content becoming inaccessible. Moreover, potential new revolutionary services – such as Facebook or Twitter in their early days – would not have the chance to develop in such a restrictive environment (consider the impact on developing countries, where such services are more and more likely to emerge and drive local and regional economic development). The Internet would cease being a rich, open and economically viable space that encourages innovation and rewards potential.


Part II: What is the way to Internet regulation?

"In managing, promoting and protecting the Internet presence in our lives, we need to be no less creative than those who invented it." Kofi Annan, UN Secretary General, 2004

So if we don’t shoot the messengers, then what?

What is the key to Internet regulation? Innovation!

Remember Kofi Annan’s words: in managing the Internet we need to be creative! The world as we knew it has changed due to the digital revolution; markets and societal relations have changed – and so should regulatory approaches and outdated business concepts.

New regulatory/governance approaches

Governments and regulators need to understand how the Internet works and what its major driving concepts are – openness, diversity, and inclusive governance (read a message to US Congress). If we are to preserve the potential of the Internet for development, there is no easy way to deal with its challenges (including that of illegal content). Instead, there is a need for lots of innovation in regulatory approaches, based on informed and inclusive policy discussions.

The major change is introducing open and inclusive policy-shaping processes. The reason is simple: governments don’t drive the progress of the Internet – business and user communities do. Governments are latecomers to the digital world, and they often don’t understand the basic principles of this complex and fast-changing environment that they wish to regulate (or at least to curb somewhat). This is in no way to say that someone else should become the decision-maker; instead, it is to underline that rigid regulations might not be needed at all, while those policies that are needed should be brought about through broad consultation with all stakeholders. In such a process, all interested parties would make sure their interests are taken into account:
  • User communities would ensure that human rights (such as privacy, access to and sharing of knowledge, freedom of speech, rights of people with disabilities, etc) are being protected.
  • Content providers would ensure that openness is preserved, and thereby innovative new services may emerge.
  • ISPs would ensure that the Net remains resilient with space for further innovations and investments.
  • The quality content and IPR industry would ensure that a model is found to protect IPR.
  • Governments would ensure that national interests, rules of law, and security are respected.
  • Politicians would ensure that no potential support from any industry would be lost – both traditional industries like quality content and telecoms, and emerging industries like content providers.
In an inclusive policy-shaping process, innovative approaches will emerge easily as an alternative to rigid conventional regulations: cooperative agreements and soft laws (such as on content management); geo-location techniques, if there is a need; clear yet not too restrictive legal grounds and roles of juridical institutions in content policies, instead of liability of intermediaries; public-private partnerships such as for infrastructure improvements; new services for e-government, e-literacy, or e-business; capacity building for judges, parliamentarians, and state officials, etc. Not least, only Internet policies owned by all stakeholders would eventually be implemented.

New business concepts

Instead of throttling the innovative work of Internet intermediaries – and thereby stalling their further economic investments – with excessively rigid regulatory approaches for the sake of protecting outdated business concepts, governments should create a policy environment in which new business models are encouraged. For instance, the openness of the Internet has enabled content providers such as Google, Facebook, or even MegaUpload to emerge and to create a business model in which they earn from advertising rather than from user subscriptions. The liberalised telecom markets have made telecoms and ISPs more innovative with their business models: besides constantly experimenting with ‘all you can eat’ (flat rate), ‘pay for what you eat’, and data-cap subscription models, they are also thinking aloud about how to take parts of the ‘big cake’ earned by content providers (follow the network neutrality debate).

An important example is related to the area of intellectual property rights. Apple’s iTunes has created a revolution in sharing digital content based on micro-payments, while respecting authors’ rights. At the same time, however, the powerful quality content industry (like Hollywood) is pushing the regulators to protect its outdated business models. Governments should instead stimulate the quality content industry (for instance through enabling a policy environment for reliable e-payments) to innovate its own business models and adapt to the digital world.

Keeping the messenger alive requires an understanding of the complex multidisciplinary area of Internet governance, but it saves the major emerging economic players (ISPs and content providers) – which are quickly overtaking the giants of old (e.g. Hollywood) in terms of financial importance to political establishments. In the long run it also preserves the openness of the Internet and thereby creates space for the further development of societal and business innovation and investment. Isn’t that what we all want? Well…perhaps not all of us.



Can free choice hurt open Internet markets?

Published at:

(January 2012)


Regulation by states, or self-regulation by companies themselves?

There is almost no field of Internet governance that does not involve this debate. The well-known example in the telecom market is, of course, service costs: business opts for an open market in which user choice would force providers to adjust their costs; regulators are there to make sure this really works, and to intervene in the case of market-dominant providers.

Another example is network neutrality. Telecom operators argue that introducing economics-driven traffic management (throttling the content that brings less revenue, for the benefit of higher speeds for more profitable content) is an option that might suit some users, and that users’ choice would shape the market offers and make providers follow users’ needs. Some regulators and many civil society groups think that regulation should be in place to protect the equal treatment of all traffic (except for technical reasons like traffic congestion or latency). A good overview of the various positions of Vint Cerf, Google, AT&T, Cisco, Verizon, ISOC, and other telcos and regulators is available in the transcript of the session on net neutrality that took place at the global IGF in Lithuania in 2010.

Not least, a similar battle exists in the field of privacy and data protection. Companies – especially multinational ones – stand strongly for more general yet more globally synchronised privacy and data protection policies. Besides easing their internal organisation and business offers on various continents, such policies would allow companies to ‘offer different levels of privacy and control for users’ which would give ‘the necessary flexibility without stifling innovation’, as noted by Telefonica in one of its excellent blog posts. Some governments, and again most civil society organisations, would never agree to leave privacy policies to business: referring to privacy as one of the basic human rights, they strongly request clear and detailed regulations on online privacy and data protection.

Free choice can really encourage competition and improve open Internet markets. But how capable are we – the users – of making good and informed choices?

Let’s remember here the famous question in this field: how many of us have ever read the privacy policies when creating new accounts on Gmail, Facebook or Tumblr? Few hands, if any, would be raised in an overcrowded conference room. Even the lawyers in the room would say they need more than concentration to understand what these policies say. Our choice there is free, though limited: we can accept the policy and its consequences, or not use the service; but this is certainly not an informed choice, is it?

Transparency is an oft-used word in these debates, and all agree it is a must: not only as a request for business to make all the information about a service accessible to users, but also to make it available in a comprehensible, short, and easy way.

But let’s imagine that all the information on a particular service is readily available: technical performance, detailed cost plans, traffic management procedures, quality of service details, privacy options, security levels, and much more.

Let's face it: making a choice is not easy anymore!

Simple things like buying a new cell phone have become increasingly complex: we can easily compare the performance of a number of models with a single mouse click, but deciding on the right one requires time to browse through all the options and preselect a few to compare; it requires time to analyse the detailed comparisons of all the features; and it requires knowledge to understand what various built-in technologies or options stand for. I bought my Amazon Kindle almost two years after I first thought about it: I kept following the emerging readers with ever-newer options and built-in technologies, never sure if the next one might be just better than what I had previously thought of buying...

Deciding on the privacy or security of our data, or on our ability to access a variety of online services, is a far more important and complicated task. To make an informed choice about each new service or gadget on the market, an average user would require:
  • clear awareness of his or her user (and human) rights in all formats (including in the online space);
  • lots of time to analyse their options and constantly adjust settings at each of the frequent service upgrades that bring more options (just remember Facebook upgrades);
  • a high level of (often legal-background) literacy; and
  • a fair level of understanding of technology, policy, and the economy.
While we can easily buy another phone if we make a poor choice, the consequences of a poor choice with issues like privacy or security of our data may be long-term and very disturbing.

So here is the paradox to explore: Will greater choice and transparency hurt users and markets?

Lots of choice in today’s dynamic technological environment may prove counterproductive. Users may start making rash, uninformed, irrational decisions (as some already do). This can lead to dissatisfaction, and then to cyber-activism against some services and providers, even in favour of protection through tougher regulation (which already happens). Such induced tough regulation can backfire on open markets and innovation.

Corporations often think of advancing self-regulation through more transparency and building end-user awareness. With 2 billion Internet users today and growing, a dizzyingly rapid evolution of technologies and services, and a lack of fundamental education for billions worldwide, achieving ubiquitous informed user choice is a difficult task – if not a mission impossible, then certainly a mission that may be far too expensive and protracted.

Perhaps, on closer inspection, corporations might see that accepting some light regulation, rather than simply pushing free choice and self-regulation, could serve their interests better. Governments and regulators are inevitably becoming more involved in Internet policy. Yet their capacities are very limited, both in terms of their basic understanding of Internet principles (working and management) and in terms of their capability to follow various policy processes. If corporations were also to focus on developing institutional capacity – enabling governments to better understand why tough regulations may harm the development of the Internet and their economies – and if these same corporations made their policies light, easy, and convenient for users to make informed choices, perhaps this would bring better results than forcing the issue of market self-regulation based on free choice?