Monday, October 8, 2012

Net neutrality debate goes to the ITU WCIT

Published at: http://www.diplomacy.edu/blog/net-neutrality-debate-goes-itu-wcit

(September 2012)

‘The enemy of my enemy is my friend.’ This is one way to read ETNO’s (European Telecommunications Network Operators’ Association) most recent proposal to the ITU (International Telecommunication Union) to regulate the possibility of creating a multi-tiered Internet. Weren’t the telecom operators against more regulation, especially within a global framework such as the ITU? The stakes are higher now – mandating the ITU to allow exceptions to net neutrality may prevent national regulators from imposing the opposite!

In his interview with CNET, Luigi Gambardella, chairman of ETNO’s executive board, clearly explains the idea behind the proposed principle of ‘sender-party-pays’ for Internet traffic. In a nutshell: ‘...the operators are free to negotiate commercial agreements beyond best effort. These commercial agreements are based on the value of the information, not the bits.’

A multi-tiered Internet concept, net neutrality, and pricing issues

In order to make big content providers like Google, Facebook, or Hollywood pay a fee for using the telecom infrastructure to reach their customers (and earn fortunes in the process), the operators propose introducing a ‘business tier’ of the Internet, i.e. special services with a quality of service beyond best effort, which is today’s Internet standard. To maintain an open Internet, they argue, the business tier would run in parallel with the ‘economic tier’, i.e. the Internet as we know it, which would remain unchanged and based on the principle of best effort.

Proposals on a multi-tier Internet have been at the heart of discussions on net neutrality for years. ETNO has been pushing this idea together with other major operators around the world. The business tier has also been proposed, in the form of ‘additional online services’, by Verizon and Google in their Legislative Framework Proposal for an Open Internet in 2010.

Opposition from the Internet communities has always been loud and clear. The Electronic Frontier Foundation (EFF), in its analysis of the Google-Verizon proposal, pointed out that the proposed ‘additional online services ... could be the exception that swallows the nondiscrimination rule’.
Gambardella argues that the multi-tier Internet can bring more choice:

In the end, the customer will have more choice. It's like if you travel in economy. But why don't you also allow business class, a premium class, to differentiate the service? There is more choice. The customer decides what is better for him.

However, with both the business and economic tiers running through the same network and sharing the same total available bandwidth, it is likely that the economic tier will be the one to suffer in the event of congestion. Or, to continue with Gambardella’s airplane analogy: there is a limited number of economy and business seats on a plane, and no business seats will be handed over to economy class even if demand for economy exceeds demand for business; ergo, economy class will suffer. Besides, the argument of ‘more choice’ is becoming vague in the digital age, as I argued in one of my previous blogs: can most users really make informed and meaningful choices and decide what’s best for them?

Not least, the Internet community is asking operators to be more creative and innovative in constructing new business models instead of finding simple ways to take pieces of the content providers’ revenue pie. In his excellent post on Carriage vs Content, Geoff Huston explains the decades-long dispute between carriers (telecom operators) and content providers on who owes money to whom: from ‘You owe us money!’  to ‘No, it’s you who owe us money’ and back again.

ETNO’s proposal, beyond asking for a multi-tiered Internet, also suggests a fundamental change in the Internet’s economic model through the principle of ‘sending party network pays’, explaining how the gigantic revenues of the content industry could be distributed to the operators as well. This new ‘you owe us money!’ outcry has provoked strong opposition from the leading Internet businesses in the USA, including ETNO’s fellow operator Verizon, and also from the US government. A separate post is needed to discuss the economic aspects of the proposal, so I will not go into detail here.

To regulate or not to regulate

Back to the even more interesting bit: ETNO’s proposal is put forward as an amendment to the International Telecommunication Regulations (ITR), the treaty that determines how international telecommunications services operate across borders. This ITU ‘mandate’ document is to be updated at the World Conference on International Telecommunications (WCIT) in Dubai this December for the first time since it was developed back in 1988.

The controversies around the forthcoming WCIT and the modifications of the ITR have been hovering around global diplomatic and policy fora for several years, gaining intensity as the Dubai meeting approaches. Positions vary from the most developed countries, which oppose the ITU extending its mandate over the Internet, to most of the global south, which is seeking more influence over the regulation of the Internet under the ITU umbrella. The Internet businesses – especially the dominant US and European content providers and telecom operators – have almost unanimously been against greater regulation both at local and at global level; for them, giving more say to the ITU would mean the globalisation of the governance of their business models and markets.
Now, it is the European association of telecom operators proposing that the ITU extends its mandate to allow the operators to negotiate commercial agreements beyond best effort – i.e. to introduce a business tier to the Internet. In effect, as user communities would argue, ETNO proposes that the new global telecom regulations allow exceptions to the net neutrality principle.

This move, in fact, is quite logical and even smart: if the global treaty allowed a multi-tier Internet, national regulators would have much less room for manoeuvre to protect the network neutrality principle. Due to pressure from Internet communities on their governments and line regulators to protect the open Internet, a number of states have already incorporated the principle of no discrimination into their policy acts – including the USA, Norway, the Netherlands, and others (as I have reported earlier); for ETNO, this may be a means to put an end to this trend.

Tweeting about the initiative, Gambardella (@lgambardella) confirms that through this proposal ETNO wants to avoid any further Internet regulation. In his interview with CNET, he justifies ETNO’s approach to the ITU process by describing the ITR as a set of high-level principles, a kind of global ‘constitution’, and explains the motives:

Because what could happen is that in one year’s time, or two year’s time, some member states would perhaps ask to introduce some new limitation on the Internet. So, basically, the paradox is that our proposal is to impede some member state to regulate further the Internet.

Needless to say, this proposal raised lots of opposition not only from the USA but also from the EU, which has clearly announced that its position for WCIT will be to oppose any attempt to extend ITU regulations to the routing of Internet traffic and Internet content. More importantly, ETNO’s fellow operators from the USA, such as Verizon and AT&T, have also raised strong concerns over this proposal – both because they are in principle against extending the ITU’s mandate over the Internet, and because they are against the new economic models suggested by ETNO which would likely hurt the dominant US businesses.

‘The enemy of my enemy is my friend.’ ETNO has decided to fight against national/regional regulation (incarnated in European governments and regulators) through the enemy’s enemy – the global regulation (incarnated in the ITU); this is a U-turn in the complex multistakeholder relations within the global ICT policy process. With its myriad of influential actors, contemporary diplomacy is becoming increasingly complex and unpredictable, isn’t it?

Hey, Govs - leave those ISPs alone!

Published at:

(March 2012)


Part I: What is wrong with governments forcing liability on Internet intermediaries?

Does the good old ‘don’t shoot the messenger’ still apply in a digital world? Even if, in today’s economy, the messenger earns quite a lot of money from his work, does it mean he should now be liable for the content of the messages he delivers? Or should states instead think of promoting legal content by changing some old regulatory models and outdated ways of doing business?

Meet the messengers

Let’s start by identifying some of the messengers in the digital world, often referred to as Internet intermediaries: the Internet service providers (ISPs) and the content providers.

Internet service providers (ISPs)

On their way through the vast network of networks, digital packets are routed by the ISPs, which guide them from source to destination along the fastest and most efficient route. ISPs do not inspect the packets for inappropriate content; they simply deliver them to us. ISPs range from local small and medium enterprises (SMEs) to giant international telecoms such as Telefonica, AT&T, and Verizon. Let’s face it: they (can) earn a lot! Many of them are among the most solvent business entities in today’s global economy, even in recessionary times.
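
As a rough illustration of what this routing looks like in practice, here is a minimal sketch – assuming a Unix-like machine with the standard traceroute utility installed, and using www.example.com purely as a placeholder destination – that lists the intermediate routers, many of them run by different ISPs, that our packets cross on the way to a website:

    # A minimal sketch (Python): list the routers our packets cross on the way to a host.
    # Assumes a Unix-like system with the standard 'traceroute' utility installed;
    # the destination hostname is only a placeholder.
    import subprocess

    def trace(host):
        # Each printed line is one 'hop': a router, often operated by a different ISP,
        # that forwards our packets one step closer to the destination.
        result = subprocess.run(["traceroute", "-m", "20", host],
                                capture_output=True, text=True, check=False)
        print(result.stdout or result.stderr)

    trace("www.example.com")

Each hop in the output is simply one of the routers described above, forwarding the packets onwards.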

Content providers

Do you even remember the last time you typed a full Internet address (URL) into your browser? No? I didn’t think so. Instead, we Google the address we need, follow the link through Facebook or Twitter, or visit social bookmarking sites. We seldom share files via e-mail any more. Instead we use DropBox, Google Docs, MegaUpload (RIP and resurrected). These content providers are also digital messengers: they find the information for us, help us share it widely and access it easily. They do not check the information we access for inappropriate content; they simply deliver the existing information to us. Let’s face it again: with billions of searches or posts per day, they certainly earn lots and lots of money through advertising. They represent the emerging economic giants of today.

Should we shoot the messengers?

To identify the best approaches to fighting illegal content, we should look through both types of lenses: shortsighted and farsighted.

The simplistic (shortsighted) view: Yes

Internet intermediaries are at the centre of digital content distribution. Technically (though only in theory) they can check each and every piece of digital information passing through their servers and filter out the inappropriate parts of the web…if they so wish, or if they are obliged to do so. This work, though costly (technology, knowledge, and manpower), would be done by the intermediaries themselves, and would save governments from having to invest in educating and equipping judicial institutions to join the digital reality.

Since the intermediaries will not be delighted at investing in inspecting and filtering our data (which would also reduce the traffic flow and thereby their profits) or at disclosing their customers’ private information to third parties (at least not without financial return), regulators might need to force them to do so. And this is where SOPA, PIPA, ACTA, and other such legislative acronyms come in, declaring the intermediaries liable for what they transport, and asking them to self-censor and reveal their users. It seems a short, sweet and practical solution: digital content will be controlled and the intellectual property rights (IPR) industry will be protected.

The holistic (farsighted) view: No

Ever heard of the term Web 2.0? Content on the Net is no longer produced or shared exclusively by web admins and the quality content industry (like Hollywood); instead, it is mostly created and shared by users themselves – some 2 billion of them at the moment. And there is so much more than the illegal content: Wikipedia’s collaborative information, YouTube’s creative artistic and educational videos, near-real-time Twitter news and updates, Google-stored scientific papers and books, endless small websites and services with local content from cultures across the globe...

Yes, there is also inappropriate and even illegal content. But are the intermediaries the right ones to judge what is appropriate or illegal and what is not? Should Telefonica or Google decide whether certain bits of content are parts of counterfeit products and whether this content is being used for private or commercial gain? Should they decide if some websites, like MegaUpload for instance, are involved in large-scale illegal activity? Are they competent to do so? Are they able to?

Some statistics will help: 60 hours of video are uploaded every minute on YouTube (link), Twitter users send an average of 140 million tweets per day (link), and over 550 million websites exist, with more than 300 million added in 2011 alone (link). Even if intermediaries were obliged to do so, what would it take for them – in terms of equipment and man-hours (lawyers and engineers) – to regularly check through all this content? Mission impossible – even for ius cogens types of content (child porn, justification of genocide or terrorism) or politically or culturally sensitive content (porn, gambling or Nazi materials), let alone for IPR with its specific and sensitive legal aspects.

Yet, if governments force them to and hold them liable, then in order to avoid severe financial penalties intermediaries may turn to the simplest (or only possible) solution: severe self-censorship based on even the slightest insinuation of what could be inappropriate content, according to their own blurry internal criteria. In practice this could mean that:
  • Twitter and Facebook – since they would not be able to follow all the posts and shared links – might start censoring posts based on keywords or web addresses blacklisted according to unknown internal criteria.
  • Wikipedia – since it would not be able to check through all its articles and links – might need to remove all the articles entered collaboratively by users where there is even a possibility that a quoted source does not respect authors’ rights.
  • PayPal would cease providing services to Internet companies in which they are not 100% confident (remember the case of Wikileaks?).
  • Google might filter out potentially questionable search results, again based on internal blacklists.
  • ISPs might introduce filtering of entire web spaces (DNS filtering) such as YouTube or Twitter, since they cannot guarantee the content shared there.
This excessive self-censorship would result in loads of valuable Internet content becoming inaccessible. Moreover, potential new revolutionary services – such as Facebook or Twitter in their early days – would not have the chance to develop in such a restrictive environment (consider the impact on developing countries, where such services are more and more likely to emerge and influence local and regional economic development). The Internet would cease being a rich, open and economically viable space that encourages innovation and rewards potential.


Part II: What is the way to Internet regulation?

"In managing, promoting and protecting the Internet presence in our lives, we need to be no less creative than those who invented it." Kofi Annan, UN Secretary General, 2004

So if we don’t shoot the messengers, then what?

What is the key to Internet regulation? Innovation!

Remember Kofi Annan’s words: in managing the Internet we need to be creative! The world as we knew it has changed due to the digital revolution; markets and societal relations have changed – and so should regulatory approaches and outdated business concepts.

New regulatory/governance approaches

Governments and regulators need to understand how the Internet works and what its major driving concepts are – openness, diversity, and inclusive governance (read a message to US Congress). If we are to preserve the potential of the Internet for development, there is no easy way to deal with its challenges (including that of illegal content). Instead, there is a need for lots of innovation in regulatory approaches, based on informed and inclusive policy discussions.

The major change is introducing open and inclusive policy-shaping processes. The reason is simple: governments don’t drive the progress of the Internet – business and user communities do. Governments are latecomers to the digital world, and they often don’t understand the basic principles of this complex and fast-changing environment that they wish to regulate (or at least to curb somewhat). This is in no way to say that someone else should become the decision-maker; instead, it is to underline that rigid regulations might not be needed at all, while those policies that might be needed should be brought about through broad consultation with all stakeholders. In such a process, all interested parties would make sure their interests are taken into account:
  • User communities would ensure that human rights (such as privacy, access to and sharing of knowledge, freedom of speech, rights of people with disabilities, etc) are being protected.
  • Content providers would ensure that openness is preserved, and thereby innovative new services may emerge.
  • ISPs would ensure that the Net remains resilient with space for further innovations and investments.
  • The quality content and IPR industry would ensure that a model is found to protect IPR.
  • Governments would ensure that national interests, rules of law, and security are respected.
  • Politicians would ensure that no potential support from any industry would be lost – both traditional industry like quality content and telecoms, and emerging industry like content providers.
In an inclusive policy-shaping process, innovative approaches will emerge easily as an alternative to rigid conventional regulations: cooperative agreements and soft laws (such as on content management); geo-location techniques, if there is a need; clear yet not too restrictive legal grounds and roles for judicial institutions in content policies, instead of intermediary liability; public-private partnerships, such as for infrastructure improvements; new services for e-government, e-literacy, or e-business; capacity building for judges, parliamentarians, and state officials, etc. Not least, only Internet policies owned by all stakeholders would eventually be implemented.

New business concepts

Instead of throttling the innovative work of Internet intermediaries – and thereby stalling their further economic investments – with excessively rigid regulatory approaches for the sake of protecting outdated business concepts, governments should create a policy environment in which new business models are encouraged. For instance, the openness of the Internet has enabled content providers such as Google, Facebook, or even MegaUpload to emerge and to create a business model in which they earn from advertising rather than from user subscriptions. Liberalised telecom markets have made telecoms and ISPs more innovative with their business models: besides constantly experimenting with all-you-can-eat (flat rate), pay-for-what-you-eat, and data-cap subscription models, they are also thinking aloud about how to take a share of the ‘big cake’ earned by content providers (follow the network neutrality debate).

An important example is related to the area of intellectual property rights. Apple’s iTunes has created a revolution in sharing digital content based on micro-payments, while respecting authors’ rights. At the same time, however, the powerful quality content industry (like Hollywood) is pushing the regulators to protect its outdated business models. Governments should instead stimulate the quality content industry (for instance through enabling a policy environment for reliable e-payments) to innovate its own business models and adapt to the digital world.

Keeping the messenger alive requires an understanding of the complex multidisciplinary area of Internet governance, but it saves the major emerging economic players (ISPs and content providers) – which are quickly overtaking the giants of old (e.g. Hollywood) in terms of financial importance to political establishments. In the long run it also preserves the openness of the Internet and thereby creates space for further development of societal and business innovation and investment. Isn’t that what we all want? Well…perhaps not all of us.



Can free choice hurt open Internet markets?

Published at:

(January 2012)


Regulation by states, or self-regulation by companies themselves?

There is almost no field of Internet governance that does not involve this debate. The well-known example in the telecom market is, of course, service costs: business opts for an open market in which user choice forces providers to adjust their prices; regulators are there to make sure this really works, and to intervene in the case of market-dominant providers.

Another example is network neutrality. Telecom operators argue that introducing economically driven traffic management (throttling content that brings in less revenue in favour of higher speeds for more profitable content) is an option that might suit some users, and that user choice would shape the market offers and make providers follow users’ needs. Some regulators and many civil society groups think that regulation should be in place to protect the equal treatment of all traffic (except for technical reasons like congestion or latency). A good overview of the various positions held by Vint Cerf, Google, AT&T, Cisco, Verizon, ISOC, and other telcos and regulators is available in the transcript of the session on Net neutrality that took place at the global IGF in Lithuania in 2010.

Not least, a similar battle exists in the field of privacy and data protection. Companies – especially multinational ones – stand strongly for more general yet more globally synchronised privacy and data protection policies. Besides easing their internal organisation and business offers on various continents, such policies would allow companies to ‘offer different levels of privacy and control for users’ which would give ‘the necessary flexibility without stifling innovation’, as noted by Telefonica in one of its excellent blog posts. Some governments, and again most civil society organisations, would never agree to leave privacy policies to business: referring to privacy as one of the basic human rights, they strongly request clear and detailed regulations on online privacy and data protection.

Free choice can really encourage competition and improve open Internet markets. But how capable are we – the users – of making good and informed choices?

Let’s remember here the famous question in this field: how many of us have ever read the privacy policies when creating new accounts on Gmail, Facebook or Tumblr? Few hands, if any, would be raised in an overcrowded conference room.  Even the lawyers among them would say they need more than concentration to understand what these policies say. Our choice there is free, though limited: we can accept the policy and its consequences, or not use the service; but this is certainly not an informed choice, is it?

Transparency is an oft-used word in these debates, and all agree it is a must: not only as a request for business to make all the information about the service accessible to users, but also to make it available in a comprehensible, short, and easy way.

But let’s imagine that all the information on a particular service is readily available: technical performance, detailed cost plans, traffic management procedures, quality of service details, privacy options, security levels, and much more.

Let's face it: making a choice is not easy anymore!

Simple things like buying a new cell phone have become increasingly complex: we can easily compare the performance of a number of models with a single mouse click, but deciding on the right one requires time to browse through all the options and preselect a few to compare; it requires time to analyse the detailed comparisons of all the features; and it requires knowledge to understand what the various in-built technologies or options stand for. I bought my Amazon Kindle almost two years after I first thought about it: I kept following the emerging readers with ever newer options and in-built technologies, never sure whether the next one might be just a bit better than the one I had previously thought of buying...

Deciding on the privacy or security of our data, or on our ability to access a variety of online services, is a far more important and complicated task. To make an informed choice about each new service or gadget on the market, an average user would require:
  • clear awareness of his or her user (and human) rights in all formats (including in the online space);
  • lots of time to analyse their options and constantly adjust settings at each of the frequent service upgrades that bring more options (just remember Facebook upgrades);
  • a high level of (often legal-background) literacy; and
  • a fair level of understanding of technology, policy, and economics.
While we can easily buy another phone if we make a poor choice, the consequences of a poor choice with issues like privacy or security of our data may be long-term and very disturbing.

So here is the paradox to explore: Will greater choice and transparency hurt users and markets?

Lots of choice in today’s dynamic technological environment may appear to be counterproductive. Users may start making rash, uninformed, irrational decisions (as some already do). This can lead to dissatisfaction, and then cyber-activism against some services and providers, even in favour of protection through tougher regulation (which already happens). Induced tough regulation can strike back against open markets and innovations.

Corporations often think of advancing self-regulation through more transparency and building end-user awareness. With 2 billion and growing Internet users today, a dizzyingly rapid evolution of technologies and services, and a lack of fundamental education for billions worldwide, achieving ubiquitous informed user choice is a difficult task – if not a mission impossible, then certainly a mission that may be far too expensive and protracted.

Perhaps, on closer inspection, corporations might see that accepting some light regulation, rather than simply pushing free choice and self-regulation, could serve their interests better. Governments and regulators are inevitably becoming more involved in Internet policy. Yet their capacities are very limited, both in terms of their basic understanding of Internet principles (how it works and how it is managed) and in terms of their capability to follow the various policy processes. If corporations were also to focus on developing institutional capacity so that governments better understand why tough regulations may harm the development of the Internet and their economies – and if these same corporations made their policies light, easy, and convenient for users to make informed choices – perhaps this would bring better results than forcing the issue of market self-regulation based on free choice?

Friday, November 18, 2011

‘Operation Ghost Click’: Cyberzombies in the real world



‘The biggest cybercriminal takedown in history’, shouts the FBI! The network of 4 million ‘bots’ – hijacked end-user computers controlled remotely by perpetrators – has been tracked down and dismantled (article). It is probably not the biggest ‘botnet’ in existence today (some reports mention botnets of over 30 million computers), but it is certainly the biggest discovered and tracked down to date. It took more than four years of international cooperation – US investigators and law enforcement with Estonian and Dutch colleagues and a number of partners from business and academia – to notice, analyse, collect evidence, and roll out a safe takedown that would not leave these 4 million computers without Internet access all at once.
Let us take a moment to comprehend the size of this ‘army’: 4 million zombies from over 100 countries! To compare: one of the mightiest armies the world has ever seen – that of the Persian King Xerxes, which crossed the Hellespont into Europe to fight Leonidas and his Spartans at the battle of Thermopylae in 480 BC – had almost 2 million soldiers of over 40 nationalities. Barely half of this cyber-army!
Fortunately enough, this army of cyberzombies was not ordered to hunt for flesh and blood, but rather for money – quite a sum, though: USD 14 million in profit. Each new bot – a computer hijacked via malware recklessly downloaded and installed by the users themselves – was then directed by several central servers to visit online advertisements, raising the number of ‘clicks’ (i.e. visits) and bringing revenue to those controlling the botnet. It all resulted in fraud; a big fraud, but still only fraud. Should we be afraid of what such an army of computer (and computational) power could do if targeted at public utilities, such as electric grids, power plants, or military facilities?
We can take some heart in knowing that international cooperation can help hunt down these virtual armies. And there is an interesting and comforting bit about that: no matter how virtual and how ‘untouchable’ cyberattackers may seem to us ‘ordinary users’, they are in fact just normal human beings, often working for legally registered companies, but hidden behind cyberspace. And once they are hunted down, they sit in the same courtrooms and lie in the same dingy prison cells as any other real-world crime perpetrator. After all, it is humans that do harm, not technology. Experiences in which real-world consequences follow cyber misdeeds help demystify cyberspace, which in turn possibly discourages some hackers while raising trust among end-users.
In spite of improvements in international cooperation in fighting cybercrime, it is not always easy to trace the real perpetrator behind such a complex structure as a botnet; in addition, the lack of harmonisation of national legislation causes jurisdiction dilemmas: who should prosecute the cyberperpetrators?
Take this case as an example: the head of the operation is a Russian businessman working with an Estonian company through servers in Estonia, the USA, and elsewhere, ending up seizing control over millions of computers in more than half of the countries of the world! So which of the jurisdiction principles (suggested by Jovan Kurbalija in An Introduction to Internet Governance, p. 87) should be used: territorial – based on what happens in a state’s territory; personality – based on where the perpetrator comes from; or the effects principle – based on where the effects of the criminal act are felt? In this case it was possibly the principle of power, with the USA asking Estonia for extraditions (though the Estonian authorities did not seem to object).
There have been a number of attempts to provide harmonisation and cooperation guidelines in the form of international documents. One often referred to, including at the recent London Cyberspace Conference, is the Budapest Convention on Cybercrime of the Council of Europe from 2001 (integral text). While some think that it provides a balanced set of principles around which many countries can gather (and over 30 already have), others think that it needs improvements in order for more states to sign it (such as in Art. 32b, which touches on the sovereignty of states). Due to growing concern over the security of critical infrastructure and public utilities, NATO’s cooperative cyber defence centre of excellence (CCD COE) has started working on a manual of international law applicable to cyberwarfare (article), to be completed by the end of 2012. Will any of these documents really help? We know that the law always lags behind technology; yet we can’t abandon the law – rather, we must enable it to catch up in bigger steps; wide international cooperation, capacity building, and knowledge and experience sharing are certainly the ways forward.
The law is not enough. As always, humans are the weakest link – almost every cyberattack has users’ ignorance and negligence as a stepping stone. ‘Social engineering’ was the technique behind spreading the DNSChanger malware that fuelled the ‘Ghost Click’ attacks: the users, eager to watch only-they-know-what-kind-of movies, clicked ‘yes’ when prompted to install some additional video codec. Bang! In a matter of seconds, the virus changed their DNS settings and allowed remote control over their browsers and online actions, turning their computers into zombies. The truth is that this rootkit (also known as TDL4 and Alureon) is among the world’s most advanced pieces of malware, able to infect not only Windows but also Apple OS, and to get around even updated antivirus programmes (article). This does not, however, remove the responsibility of these millions of users for being careless.
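
As a small, practical illustration of what DNSChanger actually tampered with, here is a minimal sketch – assuming a Unix-like system where resolver settings live in /etc/resolv.conf – that simply prints the DNS servers a machine is currently configured to use, so they can be compared against the resolvers one’s ISP actually publishes:

    # Minimal sketch (Python): print the DNS resolvers this machine is configured to use,
    # so they can be compared with the resolver addresses published by one's ISP.
    # Assumes a Unix-like system where resolver settings live in /etc/resolv.conf.

    def configured_dns_servers(path="/etc/resolv.conf"):
        servers = []
        try:
            with open(path) as f:
                for line in f:
                    parts = line.split()
                    if len(parts) >= 2 and parts[0] == "nameserver":
                        servers.append(parts[1])
        except FileNotFoundError:
            pass  # e.g. on Windows, where resolver settings are stored elsewhere
        return servers

    for server in configured_dns_servers():
        print(server)

An unfamiliar address in this list, pointing away from one’s own ISP, was precisely the symptom of a DNSChanger infection.
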
At some point, given the growing number of bots and ever more hazardous cyberattacks, the question may be raised whether we, the users, are also responsible for the safety of our computers (much as when driving our cars) and thus of the Net. Let’s prevent such a scenario! Behave responsibly online, and clean up after yourself – remember, computer hygiene ranks right up there with personal hygiene!


Tuesday, June 21, 2011

Positive action as a possible precedent for controlling Internet content

[June 2011]


Regarding the news:

MUP (the Serbian Ministry of Interior) and Telenor sign an agreement on Internet safety




The road to hell is paved with good intentions.

The partnership between MUP and Telenor aimed at preventing access to child pornography on the Internet was certainly entered into with good intentions on both sides. A joint initiative of state institutions and the private sector in the Internet domain – especially in areas that lack clear regulation (such as the regulation of Internet content) but enjoy an undisputed and broad consensus (such as the fight against child pornography) – deserves every praise. And yet, good intentions do not always bring an effective solution, and sometimes they lead to hell.

Blocking access to Internet sites containing child pornography on our telecommunications networks will largely prevent our users (especially young people) from stumbling upon such content by accident. This approach will certainly not stop anyone who genuinely wants to reach such content – which is most often hosted in foreign countries, on remote Pacific islands and elsewhere far beyond the reach of (even the global mechanisms of) justice – because there is a range of Internet services that can be used to bypass the filter. Nor will filtering help identify and catch the offenders. Blocking and filtering Internet content is one of the worst approaches to regulation: such a technological approach can be only one component of child protection; far more important are education and awareness-raising among children, parents, teachers, and society as a whole, as well as an appropriate legal framework.

So what kind of hell can a road paved like this lead to?

Introducing Internet content filtering in the absence of a legal framework – even for an entirely justified purpose such as the fight against child pornography – sets a dangerous precedent on the basis of which other unwanted content could be filtered in the future. While we can all agree that child pornography is intolerable, any filtering of politically, culturally (or even legally) ‘inappropriate’ content would amount to a form of censorship and endanger the freedom to access information and content on the Internet.

Experience around the world shows that this seemingly clear line between blocking universally unacceptable content and blocking ‘debatably problematic’ content is quickly crossed. The next step in applying this precedent is blocking in the context of hate speech, gambling, copyright, counterfeit medicines, and so on. In less orderly countries with fragile democracies, such precedents – once codified in law – are easily abused for political purposes as well: a virtual policeman pops up on monitors in China all too often (and the idea of a MUP badge on monitors in Serbia bears considerable visual resemblance to it).

It is precisely these questions that are the subject of fierce debates in Europe and around the world. A recent proposal to establish a ‘single secure European cyberspace’ (a kind of virtual Schengen zone), which leaked from the European Union – and in which content blocking for the purpose of preventing child abuse is even openly listed as the first step in a series – met with sharp criticism from European associations for the protection of Internet users’ rights. The use of blocking access to Internet content for the purpose of protecting copyright, mentioned during the G8 as well as in France, the UK, and the USA, has also met with strong resistance from user associations, and even from telecom operators who do not want to be responsible either for deciding what counts as unacceptable content or for implementing the blockades and thereby jeopardising users’ access to information and services on the Internet – especially in the absence of very clear legal norms. Finally, the pan-European dialogue on Internet governance (EuroDIG), held less than a month ago in Belgrade, showed precisely that there is no support for filtering Internet content.

Let us return to Serbia.

Despite the good intentions and the positive initiative of MUP and Telenor, a paving stone has been laid on the road to the hell of Internet content filtering in Serbia. So, before laying each of the next stones on the road of Internet regulation, let us first think carefully and discuss publicly where that road will lead – and, in parallel, work as soon as possible on a clear regulatory framework that would prevent the abuse of such precedents and the control of the Internet.

Network Neutrality in law – a step forwards or a step backwards?

[June 2011]
Published at: http://www.diplomacy.edu/blog/network-neutrality-law-%E2%80%93-step-forwards-or-step-backwards

‘Hurrah! The Netherlands has become the first European country to enshrine Net Neutrality in law.’

Many would share John Naughton’s joyous feeling expressed in his blog on 10 June. Many – but not everyone. In fact, a good number of those following the Net Neutrality debate would be cautious, if not averse. Where do you stand?
On 9 June 2011, the Netherlands became the first country to encode the principle of Network Neutrality into national law, ensuring that telecoms and Internet service providers place no restrictions on user access, and do not discriminate based on types of Internet content, services or applications. To some extent this does not come as a surprise, bearing in mind that some Dutch telecom providers openly block access to Skype and similar VoIP and online messaging services over their networks, giving the advantage to their own voice services.

Similar breaches of Network Neutrality principles are made by telcos in other countries as well. While entirely restricting access to some online applications and services is a somewhat blunt way to protect your own interests, more sophisticated approaches include openly or tacitly throttling the bandwidth for some applications such as VoIP or peer-to-peer (based on the type of application, which can easily be identified by the ISP), or surcharging for these while not charging for others (like Facebook) in ‘bandwidth cap’ models.
Clearly, this annoys and worries the users; they demand an open Internet – unrestricted access to any content, application or service online. On the other hand, the telcos and ISPs look for business models that will ensure proper returns on their investments in infrastructure and motivate them to invest further, in order to deliver the service with due quality in spite of fast-growing demands from new services for ever larger bandwidth. Governments and regulators face the challenge of finding the balance.
One of the major challenges regulators face is whether to act pre-emptively (ex-ante), in order to prevent possible breaches of the Net Neutrality principle, or to respond based on precedents (ex-post) once (and if) a breach occurs. Another challenge is whether the problem should be dealt with through ‘hard law’ – encoding the principles into legislation – or whether ‘soft law’ (guidelines and policies) would be sufficient.
Views on this are very divergent: telcos and ISPs commonly advocate existing telecom competition laws and soft ex-post anti-trust responses as sufficient to deal with Net Neutrality as well, while user communities and the software and content industry stand strongly for an ex-ante hard law approach to electronic communications, arguing that competition alone is not sufficient to protect users’ interests. Governments and regulators position themselves somewhere in between, based on the level of competition and the existing legal frameworks in their countries.
For instance, the USA copes with a lack of true telecom competition; there, the Federal Communications Commission (FCC) has been in a years-long fight with the major telcos over its legitimacy to codify and enforce the Network Neutrality principles defined through its own policy acts into legally binding rules. Japan envisages possible congestion due to fast-growing demands for bandwidth from new services; back in 2007 its Ministry of Internal Affairs and Communications worked on a comprehensive report on Net Neutrality, amending its policy programme with the principle of no discrimination. In the European Union, which has a solid competition and legal framework for telecommunications, the European Commission directed national regulatory authorities to promote ‘the ability of end-users to access and distribute information or run applications and services of their choice’ within its amended Framework Directive of late 2009 (yet remains very cautious not to endanger innovation and investment by business). The Declaration of the Committee of Ministers of the Council of Europe on Network Neutrality in late 2010 clearly supports the Net Neutrality principles, and calls on member states and the private sector to work further on guidelines. None of these approaches, however, calls for either of the extreme poles, but rather for positions in between.
The best known and most widely accepted are the Guidelines of the Norwegian regulatory authority (NPT): a soft regulation based on collaborative dialogue with the entire Internet industry and community. Voluntary but broadly supported, they provide a new ‘collaborative’ approach to Internet regulation; yet, ultimately, the regulators always preserve the option of transforming these guidelines into hard law – if necessary.
The Netherlands has chosen a pole: the ex-ante, hard law approach. Such a regulatory approach will certainly satisfy the users; the question is whether it will stifle further investment by the telcos. If it does, the Dutch might need to revert to a more balanced approach; if it does not, however, this model might outshine the Norwegian one and show that the telcos have been crying foul in the Net Neutrality debate for no reason. Let’s wait and watch closely.
For more on Network Neutrality, visit www.diplomacy.edu/ig/nn

State-driven hacktivism

[April 2011]
Published at: http://www.diplomacy.edu/blog/state-driven-hactivism

Twitter followers these days will have noticed an intense buzz about the recent Comodo case – a serious security breach within the system of trusted authorities for web certificates. The news, however, lies not in the ‘what’ or ‘how’, but rather in the ‘who’ and ‘why’. The suspects: the governmental structures of Iran. The possible motive: eavesdropping on its citizens on global communication channels.
Technically speaking, what is this all about? When we type the web address of our bank or social network platform into a browser, our Internet service provider’s DNS (Domain Name System) server translates the alphanumeric domain name (such as www.facebook.com) into the unique numeric IP address that computers and servers use to identify themselves (e.g. Facebook is 66.220.153.15), thus linking our computer with the server at that number. But who can guarantee that the DNS will not maliciously cheat us and link us to a bogus copy whose homepage looks exactly the same as Facebook’s? Such bogus websites can allow their owners to steal our usernames and passwords for social networks or online email accounts and, more seriously, our credit card numbers and the PINs to our bank accounts as well.
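
To make that translation step concrete, here is a minimal sketch – standard-library Python, with www.example.com as a placeholder hostname – of the lookup a browser quietly performs before connecting:

    # Minimal sketch (Python): the DNS lookup a browser performs behind the scenes -
    # ask the configured resolver which IP address a hostname maps to.
    import socket

    hostname = "www.example.com"              # placeholder; any public site will do
    ip_address = socket.gethostbyname(hostname)
    print(hostname, "resolves to", ip_address)
    # Whatever address the resolver returns is where our traffic is sent next -
    # which is why a lying (hijacked) DNS server is such a powerful tool.
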
Years ago, in order to make our browsing experience more reliable and secure – especially for online payments or when accessing private areas – online businesses agreed with browser providers (Google, Mozilla, etc.) to introduce the concept of trusted digital certificates for websites: each public website can obtain a digital certificate that proves to users that the server answering at the requested web address is really one approved by the site’s owner. Thereby, our browser would warn us of a bogus web page if our DNS linked us to any server other than those approved by the owner of the website we requested: for example, only servers that can present Facebook’s certificate (the one at 66.220.153.15 and a number of others operated by Facebook) would be accepted as the www.facebook.com server.
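
And here is a rough way to watch that certificate check from the user’s side – again standard-library Python with a placeholder hostname – opening a TLS connection and printing who the certificate was issued to and by; a bogus server without a certificate from a trusted CA would fail this handshake:

    # Minimal sketch (Python): open a TLS connection and inspect the certificate the
    # server presents. A bogus server without a certificate from a trusted CA would
    # make this handshake fail with a verification error.
    import socket
    import ssl

    hostname = "www.example.com"                 # placeholder; any HTTPS site will do
    context = ssl.create_default_context()       # uses the system's trusted CA store

    with socket.create_connection((hostname, 443)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=hostname) as tls_sock:
            cert = tls_sock.getpeercert()
            print("Issued to:", dict(pair[0] for pair in cert["subject"]))
            print("Issued by:", dict(pair[0] for pair in cert["issuer"]))
            print("Valid until:", cert["notAfter"])
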
The two features of this system of digital certificates for websites make it very trustworthy:
a) Technical: digital certificates are based on the reliable SSL (Secure Sockets Layer) protocol that relies on public-key cryptography – one of the most reliable cryptographic methods.
b) Economic: the system of issuing SSL certificates for websites is a well-developed market with countless multinational companies involved as clients (Microsoft, Google, Skype, major banks and online payment systems, etc.), and several other big companies acting as trusted Certificate Authorities (CA) for certificates integrated with web browsers – such as VeriSign or Comodo – that look carefully over their procedures for ensuring the real identity of the owners of the certificates they issue and certify.
So the global uneasiness resulting from the recent incident with Comodo comes as no surprise.
Yet, following the golden principle of security – a chain is only as strong as its weakest link – the perpetrators managed to get into the system by compromising less secure user accounts with one of the many affiliate registration authorities (RAs) under Comodo’s trusted root CA. Pretending to be the compromised RA, the perpetrators carried out a well-prepared, sophisticated action to register nine bogus certificates for famous websites such as those of Google, Skype and Yahoo! Had the operation not been uncovered, our browsers would not have objected to being linked to a bogus server posing as Google, Yahoo! or Skype – the bogus servers could have presented certificates for those very domain names issued by a trusted CA. Wired magazine featured an interesting analysis of the case.
This news was alarming; the reactions of Comodo, Microsoft, Mozilla, and others were prompt. But – there was more.
To really (mis)use the potential of these ‘rogue certificates’ and lure many users into accessing the nine bogus sites believing they were accessing the original ones, a perpetrator would also need to take control of one or more DNS servers and make them cheat us. The DNS system is (still) far more vulnerable than SSL, and temporarily hijacking DNS servers is not ‘a big deal’; but to have an impact on a greater number of Internet users, one would need to hijack DNS servers higher up in the hierarchy – those of major national telecoms or beyond. Moreover, an effort to break the SSL system for such important websites would make sense only if the hijacking of part of the DNS system were long-lasting, not temporary; and while a hijack can only ever be temporary – until it is uncovered and undone – longer-term control can be obtained only through physical or ‘political’ control over the DNS infrastructure’s management.
One more detail was noticeable from the Comodo report: ‘The perpetrator has focussed simply on the communication infrastructure (not the financial infrastructure as a typical cyber-criminal might)’ – the bogus certificates were requested for the following well-known websites: mail.google.com (GMail), login.live.com (Hotmail), www.google.com, login.yahoo.com, login.skype.com, addons.mozilla.org (Firefox extensions). The aim of the perpetrators was thus not to obtain financial benefit, but rather to endanger privacy – for personal, business, or possibly political benefit.
Lastly, Comodo experts claim they have traced the origin of this cyber-attack back to Tehran, Iran. Geo-localisation of the users (and attackers) according to their IP address is becoming more and more sophisticated; but so are the anonymisers that hide the IP address of the original sender – thus there is also a possibility that the attacker attempted to lay a false trail.
The reasons for believing that some governmental structures have implemented such a sophisticated well-planned cyber-attack to break into the communication identities and records of (some of) their citizens are found primarily in the fact that the platforms focused on were communication rather than financial ones, and in the suspicion that such an attack would need a strong, long-lasting second pillar in the form of the control of the (national?) DNS infrastructure. Tracing the attack back to Iran only gives a possible political context.
Concerns over SSL or DNS vulnerabilities are not new, and will probably never really disappear; they will periodically give way to periods of trust in new secure protocols and periods of mistrust due to the evolution of hacktivism. Nor is the concern that governments have become more aware of the growing importance of the Net brand new. What is a growing concern is that states now use skilful, sophisticated, ‘undercover’ hacking actions to achieve their national or international goals. The Comodo case adds to a number of recent examples, including the Stuxnet virus (industrial worm), allegedly produced by Israeli-US secret services to destroy Iranian nuclear facilities, or the case of a state-owned Chinese telecommunications firm that re-routed some 15% of the world’s web traffic through its own servers for a short while.