Information Law & Policy Centre Blog
20 Nov 2017, 17:30 to 20 Nov 2017, 19:30
Institute of Advanced Legal Studies, 17 Russell Square, London WC1B 5DR
As part of the University of London’s Being Human Festival, the Information Law and Policy Centre will be hosting a film and discussion panel evening at the Institute of Advanced Legal Studies.
One of the Centre’s key aims is to promote public engagement by bringing together academic experts, policy-makers, industry, artists, and key civil society stakeholders (such as NGOs and journalists) to discuss issues and ideas in information law and policy that are relevant to the public interest and will capture the public’s imagination.
This event will focus on the implications posed by the increasingly significant role of artificial intelligence (AI) in society and the possible ways in which humans will co-exist with AI in future, particularly the impact that this interaction will have on our liberty, privacy, and agency. Will the benefits of AI only be achieved at the expense of these human rights and values? Do current laws, ethics, or technologies offer any guidance with respect to how we should navigate this future society?
The primary purpose of this event is to encourage young adults (aged 15-18) to engage with the implications for democracy, civil liberties, and human rights posed by the increasing role of AI in society, and with how these affect their everyday decision-making as humans and citizens. A limited number of places for this event will also be available to the general public.
Confirmed speakers include:
Chair: Dr Nora Ni Loideain, Director and Lecturer in Law, Information Law and Policy Centre, University of London
- Dr Hamed Haddadi, Associate Professor at the Faculty of Engineering, Imperial College London and lead researcher of The Human-Data Interaction Project
- Hal Hodson, Technology Journalist at The Economist
- Professor John Naughton, Project Leader of the Technology and Democracy Project, University of Cambridge and columnist for The Observer
- Renate Samson, Chief Executive of leading human rights organisation Big Brother Watch
BOOKING: This event is free but advance booking is required.
Readers of the Information Law and Policy Centre blog are invited to respond to a call for papers for the Global Fake News and Defamation Symposium on the theme of ‘Fake News and Weaponized Defamation: Global Perspectives’.
Concept Note:
The notion of “fake news” has gained great currency in global popular culture in the wake of contentious, social-media-imbued elections in the United States and Europe. Although often associated with the rise of extremist voices in political discourse and, specifically, an agenda to “deconstruct” the power of government, institutional media, and the scientific establishment, fake news is “new wine in old bottles,” a phenomenon with long historical roots in government propaganda, jingoistic newspapers, and business-controlled public relations. In some countries, dissemination of “fake news” is a crime used to stifle dissent. This broad conception of fake news not only represses evidence-based inquiry by government, scientists, and the press, but also diminishes the power of populations to seek informed consensus on issues such as climate change, healthcare, race and gender equality, religious tolerance, national security, drug abuse, poverty, homophobia, and government corruption.
“Weaponized defamation” refers to the increasing invocation, and increasing use, of defamation and privacy torts by people in power to threaten press investigations, despite laws protecting responsible or non-reckless reporting. In the United States, for example, some politicians, including the current president, invoke defamation as both a sword and shield. Armed with legal power that individuals—and most news organizations—cannot match, politicians and celebrities, wealthy or backed by the wealth of others, can threaten press watchdogs with resource-sapping litigation; at the same time, some leaders appear to leverage their “lawyered-up” legal teams to make knowingly false attacks—or recklessly repeat the false attacks of others—with impunity.
Papers should have an international or comparative focus that engages historical, contemporary or emerging issues relating to fake news or “weaponized defamation.” All papers submitted will be fully refereed by a minimum of two specialized referees. Before final acceptance, all referee comments must be considered.
- Accepted papers will be peer reviewed and distributed during the conference to all attendees.
- Authors are given an opportunity to briefly present their papers at the conference.
- Accepted papers will be published in the Journal of International Media and Entertainment Law, the Southwestern Law Review, or the Southwestern Journal of International Law.
- Authors whose papers are accepted for publication will be provided with round-trip domestic or international travel (subject to caps) to Los Angeles, California, hotel accommodations, and complimentary conference registration.
Completed paper deadline: January 5, 2018
The Journal of International Media & Entertainment Law is a faculty-edited journal published by the Donald E. Biederman Entertainment and Media Law Institute at Southwestern Law School, in cooperation with the American Bar Association’s Forum on Communications Law, and the ABA’s Forum on the Entertainment and Sports Industries.
In this guest post, Harry T Dyer, University of East Anglia, looks into the complicated relationship between social media and young people. His article is relevant to the Information Law and Policy Centre’s annual conference coming up in November – Children and Digital Rights: Regulating Freedoms and Safeguards.
Facebook’s latest attempt to appeal to teens has quietly closed its doors. The social media platform’s Lifestage app (so unsuccessful that this is probably the first time you’ve heard of it) was launched a little under a year ago to resounding apathy and has struggled ever since.
Yet, as is Silicon Valley’s way, Facebook has rapidly followed the failure of one venture with the launch of another one by unveiling a new video streaming service. Facebook Watch will host series of live and pre-recorded short-form videos, including some original, professionally made content, in a move that will allow the platform to more directly compete with the likes of YouTube, Netflix and traditional TV channels.
Lifestage was just one of a long series of attempts by Facebook to stem the tide of young people increasingly interacting across multiple platforms. With Watch, the company seems to have changed tack from this focus on retaining young people, instead targeting a much wider user base. Perhaps Facebook has learnt that it will simply never be cool, but that doesn’t mean it can’t still be popular.
Lifestage was intended to compete with the increasingly popular Snapchat, the photo and video-sharing app especially popular among teenagers. But the spin-off was never able to achieve the user numbers necessary to sustain the venture. Worryingly for Facebook, this is the third failed attempt to emulate Snapchat’s success among teens, following the short-lived Facebook Poke and Facebook Slingshot, which also came to quiet and unceremonious ends. Facebook has also incorporated several of Snapchat’s features such as its Stories function directly into its main app, to a lukewarm reception.
This comes as the social media market continues to expand rapidly. Competition is fierce and numerous established companies are vying with start-ups and rising brands to catch the attention of a growing and increasingly connected user base.
No longer do one or two companies hold a monopoly on the social media landscape. Most teenagers now use more than one platform for their online interactions (though this trend does appear to be somewhat different outside the Western world). Young people are experimenting with new formats and ways of interacting, from short videos and disappearing messages to anonymous feedback apps such as Sarahah, the latest craze to explode in popularity and excite media commentators.
I don’t want my mum to see this
Yet despite these issues, Facebook is still the world’s most popular social media platform by quite some distance and has more than 2 billion users worldwide. Recent data suggests it is almost as popular as Snapchat among teens and young users, as is Facebook’s other photo-sharing app, Instagram.
The problem, of course, is that Facebook’s popularity – and, crucially, the platform’s simplistic and user-friendly design – means teenagers’ parents, teachers, bosses and even grandparents now also use the platform. For teens, that means the platform has become a headache of competing and conflicting social obligations, with various aspects and contexts of their life collapsing into a single space.
The young people I talk to for my research suggest that Facebook’s broad appeal and easy design present a unique challenge for them. Facebook is a field of potential social landmines, with the fear that the diverse user base will see everything they post – causing anxiety, hedging and inaction.
Having to negotiate this broad audience means young people seem to be less likely to use some of the public aspects of Facebook, choosing instead to rely on features such as groups and private messaging. This explains why they seem to be relying increasingly on platforms such as Instagram and Snapchat to interact with their peers, a trend also noted by other researchers.
In this light, the attempt to encourage teenagers to use the same features as they do on Snapchat – when Facebook’s brand is so associated with a more public and socially difficult environment – seems inherently flawed. We can’t say where the company will go in the future, but it seems likely it will struggle to ever be as central to young people’s online social experiences as it once was.
Watch targets a wider audience
Yet the launch of Facebook Watch suggests perhaps the company has learnt its lesson. The new service is an attempt to create a broader space that can appeal to its wide user base, rather than aiming content, ideas and spaces specifically at teens and young people.
With the announcement of the video-sharing service, Facebook has put out a call for “community orientated” original shows. It will provide users with video recommendations based on what others – and in particular their friends – are watching. In this way, Facebook Watch will allow users to find content that reflects their interests and friendships, whoever they are. Rather than attempting to retain and target a specific demographic, Facebook Watch appears to be acknowledging the platform’s broader appeal.
This seems to match Facebook’s moves away from being a pure social networking platform and towards a much broader one-stop hub for news and content. With the launch of Watch, users need never leave the walled garden of Facebook as they can view both content embedded from around the web and original videos hosted on the site. And with Facebook already ranked second only to YouTube for online video content, again this move looks like an attempt to cater to a much broader market than teens alone.
The fact that Facebook seems increasingly keen to nurture its more diverse user base is likely to be a continuing concern for young people worried about their interactions on the platform. But, on the other hand, given YouTube’s massive appeal to the teen market, Watch may serve as a way to entice teens back to Facebook. Really, there’s only one way to sum up young people’s relationship with Facebook: it’s complicated.
In this guest post, Dr Natalia Kucirkova (UCL) and Professor Sonia Livingstone (London School of Economics and Political Science) explore why ‘screen time’ is an outdated term and why we need to recognise the power of learning through screen-based technologies. Their article is relevant to the Information Law and Policy Centre’s annual conference coming up in November – Children and Digital Rights: Regulating Freedoms and Safeguards.
The idea of “screen time” causes arguments – but not just between children and their anxious parents. The Children’s Commissioner for England, Anne Longfield, recently compared overuse of social media to junk food and urged parents to regulate screen time using her “Digital 5 A Day” campaign.
This prompted the former director of Britain’s electronic surveillance agency, GCHQ, to respond by telling parents to increase screen time for children so they can gain skills to “save the country”, since the UK is “desperately” short of engineers and computer scientists.
Meanwhile, parents are left in the middle, trying to make sense of it all.
But the term “screen time” is problematic to begin with. A screen can refer to an iPad used to Skype grandparents, a Kindle for reading poetry, a television for playing video games, or a desktop computer for homework. Most screens are now multifunctional, so unless we specify the content, context and connections involved in particular screen time activities, any discussion will be muddled.
Measuring technology usage in terms of quantity rather than quality is also difficult. Children spend time on multiple devices in multiple places, sometimes in short bursts, sometimes constantly connected. Calculating the incalculable puts unnecessary pressure on parents, who end up looking at the clock rather than their children.
The Digital 5 A Day campaign has five key messages, covering areas like privacy, physical activity and creativity. Its focus on constructive activities and attitudes towards technology is a good start. Likewise, a key recommendation of the LSE Media Policy Project report was for more positive messaging about children’s technology use.
Technology use is complex and takes time to understand. Content matters. Context matters. Connections matter. Children’s age and capacity matter. Reducing this intricate mix to a simple digital five-a-day runs the risk of losing all the nutrients. Just like the NHS’s Five Fruit and Veg A Day campaign, future studies will no doubt announce that five ought to be doubled to ten.
Another problem will come from attempts to interpret the digital five-a-day as a quality indicator. Commercial producers often use government campaigns to drive sales and interest in their products. If a so-called “educational” app claims that it “supports creative and active engagement”, parents might buy it – but there will be little guarantee that it will offer a great experience. It is an unregulated and confusing market – although help is currently provided by organisations providing evidence-based recommendations such as the NSPCC, National Literacy Trust, Connect Safely, Parent Zone, and the BBC’s CBeebies.
The constant flow of panicky media headlines doesn’t help parents or improve the level of public discussion. The trouble is that there’s too little delving into the whys and wherefores behind each story, and not much independent examination of the evidence that might (or might not) support the claims being publicised. Luckily, some bodies, such as the Science Media Centre, do try to act as responsible intermediaries.
When it comes to young people and technology, it’s vital to widen the lens – away from a close focus on time spent, to the reality of people’s lives. Today’s children grow up in increasingly stressed, tired and rushed modern families. Technology commentators often revert to food metaphors to call for a balanced diet or even an occasional digital detox, and that’s fine to a degree.
But they can be taken too far, especially when the underlying harms are contested by science. “One-size-fits-all” solutions don’t work when they are taken too literally, or when they become yet another reason to blame parents (or children), or because they don’t allow for the diverse conditions of real people’s lives.
If there is a food metaphor that works for technology, it’s that we should all try some humble pie when it comes to telling others how to live. “Screen time” is an outdated and misguided shorthand for all the different ways of interacting, creating and learning through screen-based technologies. It’s time to drop it.
In this guest post, Vladlena Benson, Kingston University, assesses the need to encourage conscious social media use among the young. Her article is relevant to the Information Law and Policy Centre’s annual conference coming up in November – Children and Digital Rights: Regulating Freedoms and Safeguards.
Teenagers in Britain are fortunate to have access to computers, laptops and smartphones from an early age. A child in the UK receives a smartphone at around the age of 12 – among the earliest in Europe. The natural consequence of this is that children spend a significant amount of their time on the internet. In the nearly 20 years since the first social networks appeared on the internet, there has been considerable research into their psychological, societal, and health effects. While these have often been seen as largely negative over the years, there is plenty of evidence to the contrary.
A recent report from the Education Policy Institute, for example, studied children’s use of the internet and their mental health. The report found that teenagers value social networks as a way of connecting with friends and family, maintaining their friendship networks, and keeping up long-distance connections. Teenagers see social networking as a comfortable medium for sharing their issues and finding solutions to problems such as social isolation and loneliness. They are also more likely to seek help in areas such as health advice, unfamiliar experiences, and help with exams and study techniques.
Social networks afford the opportunity to find people with similar interests, or to support teamwork in school projects. In unsettled economic and political times, teenagers use social networks as a means to be heard and to get involved in political activism, as well as volunteering and charitable activities.
Teenagers also leverage social networks to engage with creative projects, and many young artists are first noticed through the exposure offered by the rich networking opportunities of social media, such as musicians on MySpace or photographers on image sharing sites Flickr or Behance. Teenagers looking to pursue careers in art or other creative industries turn to social platforms in order to create their portfolios as well as to create with others.
These opportunities have a positive impact on adolescent character formation and the development of individual identity, and help young people choose a career path. These choices are made at an early age, and to this end social networks are enriching young people’s lives.
Risks not to be ignored
On the other hand, the report also identified a substantial list of negative influences stemming from social media use, ranging from time-wasting and addictive, compulsive use to cyber-bullying, radicalisation, stress and sexual grooming, to name just a few.
Unsurprisingly, governments are concerned about the impact of social networking on the vulnerable, and its uncontrolled nature has prompted action from parents and politicians. Children roaming freely on social networks became an issue in the recent UK general election and was mentioned in the Conservative party manifesto, which made a key pledge of “safety for children online, and new rights to require social media companies to delete information about young people as they turn 18”. This is a tall order, as it would require erasing tens of millions of teenagers’ profiles on around 20 different social platforms, hosted in different countries worldwide.
The Conservatives also suggested the party would “create a power in law for government to introduce an industry-wide levy from social media companies and communication service providers to support awareness and preventative activity to counter internet harms”. Awareness-raising is an important step towards encouraging conscious social media use among the young. But despite continuing efforts to educate youngsters about the dangers (and, to be fair, the benefits) of using social media, many are wary of the impact technology may have on overly-social teenagers once outside parental control.
It has been shown that teenagers increasingly use social networks in private, leaving parents outside environments where children are exposed to real-time content and largely unguarded instant communications. The concern raised in the report that “responses to protect, and build resilience in, young people are inadequate and often outdated” is timely. While schools are tasked with educating teenagers about the risks of social media, very few parents are able to effectively control the content their children access or monitor the evolving threats that operate online.
Speak their language
A recent study of compulsive social media use showed that it is not the user’s age that matters, but their individual motivations. In fact, users who are highly sociable and driven by friends towards compulsive social media use suffer physically and socially. On the other hand, when users are driven by hedonic (fun-seeking) motivations, their physical health and sociability improve. This explains why teenagers in the UK see social networking as a positive phenomenon that enriches their social life. There is clearly potential to harness these positives.
While the tech giants that run the social networks with billions of users must play their part in ensuring the safety of their youngest users, it is also parents’ role to talk openly with their children about their use of social networks and to set expected standards of use. Teenagers have questions about life and are looking for answers to their problems as they go through a challenging period. With the prime minister naming “mental health as a key priority”, schools, parents, politicians and social networking platforms should help teenagers build resilience to what they encounter online and how it makes them feel, rather than adopting only a safeguarding approach. It is interesting to note that 78% of young people who contact the organisation Childline now do so online: teachers, family and friends providing support should make the most of a medium with which today’s children and teenagers are comfortable.
Readers of the Information Law and Policy Centre blog are invited to participate in the second, full-day International Law for the Sustainable Development Goals Workshop at the Department of International Law, University of Groningen, the Netherlands.
Our aim with the second track of this one-day Workshop is to explore the potential value of the right to science in the context of technology and knowledge transfer and sustainable development. More specifically, we aim to discuss the role of the right to science as (a) a means to implement the SDGs and related human rights; (b) an enabler of international cooperation on technology and knowledge sharing; and (c) a stand-alone human right, with the respective obligations of States to enhance systemic policy and institutional coherence and to inform policy development and coordination.
Please find the detailed Call for Papers available here.
We invite abstract proposals from interested scholars from all disciplines. Proposals should not exceed 500 words in length. Please send your proposals as an attachment to Marlies Hesselman (email@example.com) for Track 1 and to Mando Rachovitsa (firstname.lastname@example.org) for Track 2. The deadline for abstracts is 15 September 2017. All proposals will undergo peer review and notifications of acceptance will be sent out by 20 September 2017.
Draft papers are expected to be delivered by 15 November 2017 for circulation among participants. We plan to pursue the publication of a special issue as a result of this Workshop.
The Workshop is scheduled to take place on 24 November 2017 at the University of Groningen, as part of the 2017-2018 Workshop Series “International Law for the Sustainable Development Goals”.
Information Law and Policy Centre’s Annual Conference 2017 – Children and Digital Rights: Regulating Freedoms and Safeguards
Date 17 Nov 2017, 09:30 to 17 Nov 2017, 17:30
Venue Institute of Advanced Legal Studies, 17 Russell Square, London WC1B 5DR
The Internet provides children with more freedom to communicate, learn, create, share, and engage with society than ever before. For instance, research by Ofcom in 2016 found that 72% of young teenagers (twelve to fifteen) in the UK have social media accounts which are often used for homework groups. 20% of the same group have made their own digital music and 30% have used the Internet for civic engagement by signing online petitions, or sharing and talking online about the news.
Interacting within this connected digital world, however, also presents a number of challenges to ensuring the adequate protection of a child’s rights to privacy, freedom of expression, and safety, both online and offline. These risks range from children being unable to identify advertisements on search engines to bullying in online chat groups. Children may also be targeted via social media platforms with methods (such as fake online identities or manipulated photos/images) specifically designed to harm them or exploit their particular vulnerabilities and naivety.
At the ILPC’s Annual Conference, regulators, practitioners, civil society, and leading academic experts will address and examine the key legal frameworks and policies being used and developed to safeguard these freedoms and rights. These legislative and policy regimes include the UN Convention on the Rights of the Child, the UK Digital Charter, and the related provisions (such as consent, transparency, and profiling) under the Data Protection Bill, which will implement the EU General Data Protection Regulation.
The ILPC’s Annual Conference and Lecture will take place on Friday 17th November 2017, followed by an evening reception.
Attendance will be free of charge thanks to the support of the IALS and our sponsors, although registration is required as places are limited.
Key speakers, chairs, and discussants at the Annual Conference will provide a range of national and international legal insights and perspectives from the UK, Israel, Australia, and Europe, and will include:
- Baroness Beeban Kidron OBE, Film-maker, Member of The Royal Foundation Taskforce on the Prevention of Cyberbullying, and Founder of 5Rights
- Anna Morgan, Head of Legal, Deputy Data Protection Commissioner of Ireland;
- Lisa Atkinson, Group Manager on Policy Engagement, Information Commissioner’s Office;
- Renate Samson, Chief Executive of Big Brother Watch;
- Graham Smith, Bird & Bird LLP, Solicitor and leading expert in UK Internet Law.
The best papers from the conference’s plenary sessions and panels will be featured in a special issue of Bloomsbury’s Communications Law journal, following a peer-review process. Those giving papers will be invited to submit full draft papers to the journal by 1st November 2017 for consideration by the journal’s editorial team.
About the Information Law and Policy Centre at the IALS:
The Information Law and Policy Centre (ILPC) produces, promotes, and facilitates research about the law and policy of information and data, and the ways in which law both restricts and enables the sharing, and dissemination, of different types of information.
The ILPC is part of the Institute of Advanced Legal Studies (IALS), which was founded in 1947. It was conceived, and is funded, as a national academic institution, attached to the University of London, serving all universities through its national legal research library. Its function is to promote, facilitate, and disseminate the results of advanced study and research in the discipline of law, for the benefit of persons and institutions in the UK and abroad.
The ILPC’s Annual Conference and Annual Lecture form part of a series of events celebrating the 70th Anniversary of IALS in November.
About Communications Law (Journal of Computer, Media and Telecommunications Law):
Communications Law is a well-respected quarterly journal published by Bloomsbury Professional covering the broad spectrum of legal issues arising in the telecoms, IT, and media industries. Each issue brings you a wide range of opinion, discussion, and analysis from the field of communications law. Dr Paul Wragg, Associate Professor of Law at the University of Leeds, is the journal’s Editor in Chief.
In this guest post Lorna Woods, Professor of Internet Law at the University of Essex, explores the EU’s proposed Passenger Name Record (PNR) agreement with Canada. This post first appeared on the blog of Steve Peers, Professor of EU, Human Rights and World Trade Law at the University of Essex.
Opinion 1/15 EU/Canada PNR Agreement, 26th July 2017
Facts
In the interests of the fight against serious crime and terrorism, Canadian law required airlines to provide certain information about passengers (API/PNR data), an obligation which required airlines, subject to EU data protection rules, to transfer data outside the EU. The PNR data includes the names of air passengers, the dates of intended travel, the travel itinerary, and information relating to payment and baggage. The PNR data may reveal travel habits, relationships between two individuals, and information on the financial situation or dietary habits of individuals. To regularise the transfer of data, and to support police cooperation, the EU negotiated an agreement with Canada specifying the data to be transferred and the purposes for which the data could be used, as well as some processing safeguards (e.g. use of sensitive data, security obligations, oversight requirements, access by passengers). The data was permitted to be retained for five years, albeit in a depersonalised form. Further disclosure of the data beyond Canada and the Member States was permitted in limited circumstances. The European Parliament requested an opinion from the Court of Justice under Article 218(11) TFEU as to whether the agreement satisfied fundamental human rights standards and whether the appropriate Treaty base had been used for the agreement.
Opinion
The Court noted that the agreement fell within the EU’s constitutional framework, and must therefore comply with its constitutional principles, including (though this point was not made express) respect for fundamental human rights (whether as a general principle or by virtue of the EU Charter – the EUCFR).
After dealing with questions of admissibility, the Court addressed the question of appropriate Treaty base. It re-stated existing principles (elaborated, for example, in Case C‑263/14 Parliament v Council, judgment 14 June 2016, EU:C:2016:435) with regard to choice of Treaty base generally: the choice must rest on objective factors (including the aim and the content of that measure) which are amenable to judicial review. In this context the Court found that the proposed agreement has two objectives: safeguarding public security; and safeguarding personal data [opinion, para 90]. The Court concluded that the two objectives were inextricably linked: while the driver for the need for PNR data was the protection of public security, the transfer of data would be lawful only if data protection rules were respected [para 94]. Therefore, the agreement should be based on both Article 16(2) (data protection) and Article 87(2)(a) TFEU (police cooperation). It held, however, that Article 82(1)(d) TFEU (judicial cooperation) could not be used, partly because judicial authorities were not included in the agreement.
Looking at the issue of data protection, the Court re-stated the question as being ‘on the compatibility of the envisaged agreement with, in particular, the right to respect for private life and the right to the protection of personal data’ [para 119]. It then commented that although both Article 16 TFEU and Article 8 EUCFR enshrine the right to data protection, in its analysis it would refer to Article 8 only, because that provision lays down in a more specific manner the conditions for data processing. The agreement refers to the processing of data concerning identified individuals, and therefore may affect the fundamental right to respect for private life guaranteed in Article 7 EUCFR as well as the right to protection of personal data in Article 8 EUCFR. The Court re-iterated a number of principles regarding the scope of the right to private life:
‘the communication of personal data to a third party, such as a public authority, constitutes an interference with the fundamental right enshrined in Article 7 of the Charter, whatever the subsequent use of the information communicated. The same is true of the retention of personal data and access to that data with a view to its use by public authorities. In this connection, it does not matter whether the information in question relating to private life is sensitive or whether the persons concerned have been inconvenienced in any way on account of that interference’ [para 124].
The transfer of PNR data and its retention and any use constituted an interference with both Article 7 [para 125] and Article 8 EUCFR [para 126]. In assessing the seriousness of the interference, the Court flagged ‘the systematic and continuous’ nature of the PNR system, the insight into private life of individuals, the fact that the system is used as an intelligence tool and the length of time for which the data is available.
Interferences with these rights may be justified. Nonetheless, there are constraints on any justification: Article 8(2) of the EU Charter specifies that processing must be ‘for specified purposes and on the basis of the consent of the person concerned or some other legitimate basis laid down by law’; and, according to Article 52(1) of the EU Charter, any limitation must be provided for by law and respect the essence of those rights and freedoms. Further, limitations must be necessary and genuinely meet objectives of general interest recognised by the Union or the need to protect the rights and freedoms of others.
Following WebMindLicenses (Case C‑419/14, judgment of 17 December 2015, EU:C:2015:832, para 81), the law that permits the interference should also set down the extent of that interference. Proportionality requires that any derogation from and limitation on the protection of personal data should apply only insofar as is strictly necessary. To this end and to prevent the risk of abuse, the legislation must set down ‘clear and precise rules governing the scope and application of the measure in question and imposing minimum safeguards’, specifically ‘indicat[ing] in what circumstances and under which conditions a measure providing for the processing of such data may be adopted’ [para 141], especially when automated processing is involved.
The Court considered whether there was a legitimate basis for the processing, noting that although passengers may be said to consent to the processing of PNR data, this consent related to a different purpose. The transfer of the PNR data is not conditional on the specific consent of the passengers and must therefore be grounded on some other basis, within the terms of Article 8(2) EUCFR. The Court rejected the Parliament’s submission that the meaning of ‘law’ should be restricted, as it is internally, to a ‘legislative act’. Following the reasoning of the Advocate General, the Court found that in this regard the international agreement was the external equivalent of a legislative act.
In line with its previous jurisprudence, the Court accepted that public security is an objective of public interest capable of justifying even serious interferences with Articles 7 and 8 EUCFR. It also noted that everybody has the right to security of the person (Art. 6 EUCFR), though this point was taken no further. The Court considered that PNR data revealed only limited aspects of a person’s private life, so that the essence of the right was not adversely affected [para 151]. In principle, limitation may then be possible. The Court accepted that PNR data transfer was appropriate, but not that the test of necessity was satisfied. It agreed with the Advocate General that the categories of data to be transferred were not sufficiently precise, specifically ‘available frequent flyer and benefit information (free tickets, upgrades, etc.)’, ‘all available contact information (including originator information)’ and ‘general remarks including Other Supplementary Information (OSI), Special Service Information (SSI) and Special Service Request (SSR) information’. Although the agreement required the Canadian authorities to delete any data transferred to them which fell outside these categories, this obligation did not compensate for the lack of precision regarding the scope of these categories.
The Court noted that the agreement identified a category of ‘sensitive data’; it was therefore to be presumed that sensitive data would be transferred under the agreement. The Court then reasoned:
any measure based on the premiss that one or more of the characteristics set out in Article 2(e) of the envisaged agreement may be relevant, in itself or in themselves and regardless of the individual conduct of the traveller concerned, having regard to the purpose for which PNR data is to be processed, namely combating terrorism and serious transnational crime, would infringe the rights guaranteed in Articles 7 and 8 of the Charter, read in conjunction with Article 21 thereof [para 165]
Additionally, any transfer of sensitive data would require a ‘precise and particularly solid’ reason beyond that of public security and prevention of terrorism. This justification was lacking. The transfer of sensitive data and the framework for the use of those data would be incompatible with the EU Charter [para 167].
While the agreement tried to limit the impact of automated decision-making, the Court found it problematic because of the need to have reliable models on which the automated decisions were made. These models, in the view of the Court, must produce results that identify persons under a ‘reasonable suspicion’ of participation in terrorist offences or serious transnational crime and should be non-discriminatory. Models/databases should also be kept up-to-date and accurate and subject to review for bias. Because of the error risk, all positive automated decisions should be individually checked.
In terms of the purposes for processing the data, the definitions of terrorist offences and serious transnational crime were sufficiently clear. There were, however, other provisions allowing case-by-case assessment. These provisions (Article 3(5)(a) and (b) of the treaty) were found to be too vague. By contrast, the Court determined that the authorities who would receive the data were sufficiently identified. Further, it accepted that the transfer of the data of all passengers, whether or not they were identified as posing a risk, does not exceed what is necessary, as passengers must comply with Canadian law and ‘the identification, by means of PNR data, of passengers liable to present a risk to public security forms part of border control’ [para 188].
Relying on its recent judgment in Tele2/Watson (Joined Cases C‑203/15 and C‑698/15, EU:C:2016:970), which I discussed here, the Court reiterated that there must be a connection between the data retained and the objective pursued for the duration of the time the data are held, which brought into question the use of the PNR data after passengers had disembarked in Canada. Further, the use of the data must be restricted in accordance with those purposes. However,
where there is objective evidence from which it may be inferred that the PNR data of one or more air passengers might make an effective contribution to combating terrorist offences and serious transnational crime, the use of that data does not exceed the limits of what is strictly necessary [para 201].
Following verification of passenger data and permission to enter Canadian territory, the use of PNR data during passengers’ stay must be based on new justifying circumstances. The Court expected that this should be subject to prior review by an independent body. The Court held that the agreement did not meet the required standards. Similar points were made, even more strongly, in relation to the use of PNR data after the passengers had left Canada. In general, this was not strictly necessary, as there would no longer be a connection between the data and the objective pursued by the PNR Agreement such as to justify the retention of the data. PNR data may be stored in Canada, however, where particular passengers present a risk of terrorism or serious transnational crime. Moreover, given the average lifespan of international serious crime networks and the duration and complexity of investigations relating to them, the Court did not consider that the retention of data for five years went beyond the limits of what is strictly necessary [para 209].
The agreement allows PNR data to be disclosed by the Canadian authority to other Canadian government authorities and to government authorities of third countries. The recipient country must satisfy EU data protection standards; an international agreement between the third country and the EU or an adequacy decision would be required. There is a further, unlimited and ill-defined possibility of disclosure to individuals ‘subject to reasonable legal requirements and limitations … with due regard for the legitimate interests of the individual concerned’. This provision did not satisfy the necessity test.
To ensure that individuals’ rights to access their data and to have data rectified are protected, in line with Tele2/Watson, passengers must be notified of the transfer of their PNR data to Canada and of its use as soon as that information is no longer liable to jeopardise the investigations being carried out by the government authorities referred to in the envisaged agreement. In this respect, the agreement is deficient. While passengers are told that the data will be used for security checks/border control, they are not told whether their data has been used by the Canadian Competent Authority beyond use for those checks. While the Court accepted that the agreement provided passengers with a possible remedy, the agreement was deficient in that it did not guarantee in a sufficiently clear and precise manner that oversight of compliance would be carried out by an independent authority, as required by Article 8(3) EUCFR.

Comment
There are lots of issues in this judgment, of interest from a range of perspectives, but its length and complexity mean it is not an easy read. Because of these characteristics, a blog post – even a lengthy one – can hardly do justice to all the issues, especially as, in some instances, it is far from clear what the Court’s position is.
On the whole the Court follows the approach of its Advocate General, Mengozzi, specifically referring back to his Opinion on a number of points. There is, as increasingly seems to be the trend, heavy reliance on existing case law, and it is notable that the Court refers repeatedly to its ruling in Tele2/Watson. This may be a judicial attempt to suggest that Tele2/Watson was not an aberration and to reinforce its status as good law, if that were in any doubt. It also operates to create a body of surveillance law rulings that are, one hopes, consistent in their underpinning principles and approach, and certainly some of the points in earlier case law are reiterated, with regard to the importance of ex ante review by independent bodies, rights of redress and the right of individuals to know that they have been subject to surveillance.
The case is of interest not only as regards mass surveillance but also, more generally, in relation to Article 16(2) TFEU. It is also the first time an opinion has been given on a draft agreement’s compatibility with human rights standards as well as on the appropriate Treaty base. In this respect the judgment may be a little disappointing; certainly on Article 16, the Court did not go into the same level of detail as the AG’s opinion [AG114-AG120]. Instead it equated Article 16 TFEU with Article 8 EUCFR, and based its analysis on the latter provision.
As a general point, it is evident that the Court has adopted a detailed level of review of the PNR agreement. The outcome of the case has been widely recognised as having implications, as discussed earlier on this blog, for example. Certainly, as the Advocate General noted, there is a possible impact on other PNR agreements [AG para 4], which relate to the same sorts of data shared for the same objectives. The EDPS made this point too, in the context of the EU PNR Directive:
Since the functioning of the EU PNR and the EU-Canada schemes are similar, the answer of the Court may have a significant impact on the validity of all other PNR instruments … [Opinion 2/15, para 18]
There are other forms of data sharing agreement: for example, SWIFT, the Umbrella Agreement and the Privacy Shield (and other adequacy decisions), the last of which is coming under pressure in any event (DRI v Commission (T-670/16) and La Quadrature du Net and Others v Commission (T-738/16)). Note that in this context the issue is not just one of the safeguards for the protection of rights, but also of Treaty base. The Court found that Article 16 must be used and that – because there was no role for judicial authorities, still less their cooperation – the use of Article 82(1)(d) was wrong. That provision has, however, been used, for example, for other PNR agreements. This means that the basis for those agreements is thrown into doubt.
The Court agreed with its Advocate General that a double Treaty base was necessary given the inextricable linkage, but there is some room to question this assumption. It could also be argued that there is a dominant purpose: the primary purpose of the PNR agreement is to protect personal data, albeit with a different objective in view, that of public security. In the background, however, is the position of the UK, Ireland and Denmark and their respective ‘opt-outs’ in the field. A finding of a joint Treaty base made possible the Court’s conclusion that:
since the decision on the conclusion of the envisaged agreement must be based on both Article 16 and Article 87 TFEU and falls, therefore, within the scope of Chapter 5 of Title V of Part Three of the FEU Treaty in so far as it must be founded on Article 87 TFEU, the Kingdom of Denmark will not be bound, in accordance with Articles 2 and 2a of Protocol No 22, by the provisions of that decision, nor, consequently, by the envisaged agreement. Furthermore, the Kingdom of Denmark will not take part in the adoption of that decision, in accordance with Article 1 of that protocol. [para 113, see also para 115]
The position would, however, have been different had the agreement been found to be predominantly about data protection and therefore based on Article 16 TFEU alone.
Looking at the substantive issues, the Court clearly accepted the need for PNR data to counter the threat from terrorism, noting in particular that Article 6 of the Charter (the “right to liberty and security of person”) can justify the processing of personal data. While it accepted that this resulted in the systematic transfer of the data of large numbers of people, there is no comment about mass surveillance. Yet is this not similar to the ‘general and indiscriminate’ collection and analysis rejected by the Court in Tele2/Watson [para 97], which cannot be seen as automatically justified even in the context of the fight against terrorism [paras 103 and 119]? Certainly, the EDPS took the view in its opinion on the EU PNR Directive that “the non-targeted and bulk collection and processing of data of the PNR scheme amount to a measure of general surveillance” [Opinion 1/15, para 63]. It may be that the difference lies in the nature of the data; even if this is so, the Court does not make this argument. Indeed, it makes no argument but rather weakly accepts the need for the data. On this point, it should be noted that “the usefulness of large-scale profiling on the basis of passenger data must be questioned thoroughly, based on both scientific elements and recent studies” [Art. 29 WP Opinion 7/2010, p. 4]. In this respect, Opinion 1/15 does not take as strong a stand as Tele2/Watson [cf. paras 105-106]; it seems that the Court was less emphatic about the significance of the surveillance than even the Advocate General [AG 176].
In terms of justification, while the Court accepts that the transfer of data and its analysis may give rise to intrusion, it suggests that the essence of the right has not been affected. In this it follows the approach in the communications data cases. It is unclear, however, what the essence of the right is; it seems that no matter how detailed a picture of an individual can be drawn from the analysis of data, the essence of the right remains intact. If the implication is that where the essence of the right is affected then no justification for the intrusion could be made, a narrow view of essence is understandable. This does not, however, answer the question of what the essence is and, indeed, whether the essence of the right is the same for Article 7 as for Article 8. In this case, the Court has once again referred to both articles, without delineating the boundaries between them, but then proceeded to base its analysis mainly on Article 8.
In terms of the relationship between provisions, it is also unclear how Article 8(2) relates to Article 52. The Court bundles the requirements of these two provisions together, but they serve different purposes: Article 8(2) further elaborates the scope of the right, while Article 52 deals with the limitations of Charter rights. Despite this, it seems that some of the findings regarding Article 52 will apply in the context of other rights. For example, in considering that an international agreement constitutes law for the purposes of the EUCFR, the Court took a broader approach to the meaning of ‘law’ than the Parliament had argued for. This seems a sensible approach, however, avoiding undue formality.
One further point about the approach to interpreting exceptions to the rights and Article 52 can be made. It seems that the Court has not followed the Advocate General who had suggested that strict necessity should be understood in the light of achieving a fair balance [AG207].
Some specific points are worth highlighting. The Court held that sensitive data (information that reveals racial or ethnic origin, political opinions, religious or philosophical beliefs, trade-union membership, or information about a person’s health or sex life) should not be transferred. It is not clear how these categories of data should be interpreted, especially as regards proxies for sensitive data (e.g. food preferences may give rise to inferences about a person’s religious beliefs).
One innovation in the PNR context is the distinction the Court introduced between the use of PNR data on entry, use while the traveller is in Canada, and use after the person has left, which perhaps mitigates the Court’s acceptance of undifferentiated surveillance of travellers. The Court’s view of the acceptability of use in relation to this last category is the most stringent. While the Court accepts the link between the objective and the processing of PNR data on arrival, after departure it expects that link to be proven; absent such proof, there is no justification for the retention of data. Does this mean that the PNR data of persons who are not suspected of terrorism or transnational crime should be deleted at the point of their departure? Such a requirement surely gives rise to practical problems and would seem to limit the Court’s earlier acceptance of the use of general PNR data to verify/update computer models [para 198].
One of the weaknesses of the Court’s case law so far has been a failure to consider investigatory techniques, and whether all are equally acceptable. Here we see the Court beginning to consider the use of automated intelligence techniques. While the Court does not go into detail on all the issues to which predictive policing and big data might give rise, it does note that models must be accurate. It also refers to Article 21 EUCFR (non-discrimination). Since this section is phrased in general terms, it has potentially wide-reaching application, potentially even beyond the public sector.
The Court’s judgment has further implications as regards the sharing of PNR and other security data with other countries besides Canada, most notably in the context of EU/UK relations after Brexit. Negotiators now have a clearer indication of what it will take for an agreement between the EU and a non-EU state to satisfy the requirements of the Charter, in the ECJ’s view. Time will tell what impact this ruling will have on the progress of those talks.
Barnard & Peers: chapter 25
JHA4: chapter II:9
You can follow Steve’s blog – and get other updates on EU law developments – on Facebook and Twitter.
On Facebook, simply ‘like’ the blog on its dedicated Facebook page.
On Twitter, you can follow the blog editor, Steve Peers.
Marion Oswald, Senior Fellow at the Centre for Information Rights, University of Winchester, contributes to the blog, examining the British and Irish Law Education and Technology Association (BILETA) consultation run by the Centre for Information Rights. The consultation took place on 7 June 2017 and concerned the impact of broadcast and social media on the privacy and best interests of young children.
In 1985, in his book ‘Amusing Ourselves to Death’, Professor Neil Postman warned us that:
‘To be unaware that a technology comes equipped with a programme for social change, to maintain that technology is neutral…is…stupidity plain and simple.’
Postman was mainly concerned with the impact of television, with its presentation of news as ‘vaudeville’ and thus its influence on other media to do the same. He was particularly exercised by the co-option by television of ‘serious modes of discourse – news, politics, science, education, commerce, religion’ and the transformation of these into entertainment packages. This ‘trivial pursuit’ information environment, he argued, risked amusing us into indifference and a kind of ‘culture-death.’ Can a culture survive ‘if the value of its news is determined by the number of laughs it provides’?
We appear not to have heeded Postman’s warnings. Many might say they are overblown. 21st century digital culture continues to emphasise the image, now often combined with ‘bite-sized’ written messages. We have instant 24/7 access to information and rolling news. We are fascinated by reality programming and digital technologies that allow us to scrutinise each other’s lives. Having lived through this digital revolution, I know and appreciate its benefits, especially in relation to the expansion of knowledge and horizons, and to freedom of expression. Like many others, however, I have concerns. What, for instance, would Postman have made of ‘sharenting’; of the ‘YouTube Families’ phenomenon; of the way that younger and younger children now feature on mainstream broadcasts, with public comment via social media using the inevitable hashtag?
It was such concerns that inspired the BILETA consultation workshop held on 7 June 2017 at IALS to discuss the legislative, regulatory and ethical framework surrounding the depiction of young children on digital, online and broadcast media. The full report from the workshop is now available here. As was to be expected, the discussion was wide-ranging with a variety of opinions expressed. The report’s authors have attempted to distil the debate into proposals which we hope will move the debate forward, and generate further discussion and no doubt criticism! (The recommendations represent the views of the report’s authors and do not necessarily represent the views of workshop participants.)
The workshop focused first on a child’s ‘reasonable expectation of privacy’, a concept that was described by one participant as ‘highly artificial and strained’. Why should a child’s privacy depend upon his or her parent’s privacy expectations, it was asked? The report’s authors propose that young children should have a privacy right independent from their parents’ privacy expectations. Such a right could be trumped by other rights or interests, for instance public interest exceptions relating to news and current affairs reporting, journalism and the arts, and the parents’ right to freedom of expression. The report’s authors recommend that there should however be a clearer requirement and process for the child’s interests to be considered alongside the potential benefits of the publication.
Mainstream broadcasters take a variety of approaches to the depiction, and identification, of young children in documentary and science programming. The media should continue to reflect the lives of children, and it is in no-one’s interests to have a media landscape in which children simply do not appear for fear of the risk of potential harm. Programmes made by highly regulated broadcasters, for whom ensuring the wellbeing of children is of paramount importance, can help to set the ethical high-water mark in this area for other forms of media to follow. We should continue to monitor the inclusion of young children in ‘Science Entertainment’ broadcasts, however, and the parallel impact of social media. The report’s authors also recommend that more research be undertaken into the impact of broadcast media exposure on young children, to understand what effect it has on them, both positive and negative. Once these effects are more fully understood, actions can be taken to reduce any potential harm.
There was some support during the workshop for the view that online intermediaries should take on more responsibility for activities and content that may be harmful to young children. The report’s authors recommend more consistency in terms of compliance and regulation between regulated broadcasters and non-mainstream digital media/social media. This could enhance protection for children in ‘YouTube families’ and other instances where there are no or limited checks on what is being put into the public domain. ‘Controller hosts’ (such as Facebook, YouTube and Twitter) and ‘independent intermediaries’ (such as Google) should have a duty of care to consider young children’s privacy and best interests in their operations. Further research should be undertaken into the potential of image-matching, tracking and content moderation technologies in controlling the extent to which information and images relating to a young child can be copied, re-contextualised and re-shown in a different context from the original post or publication.
The introduction of a Children’s Digital Ombudsman could provide a way for children’s interests to be better represented in relation to all forms of digital publications. We shouldn’t put all our eggs in the basket of the so-called ‘right to be forgotten’. Before Postman’s warnings become irreversible reality, let’s consider how we want our young children to be treated in the offline world and strive to hold the digital world to the same standards.
Defined by David Erdos, ‘Delimiting the Ambit of Responsibility of Intermediary Publishers for Third Party Rights in European Data Protection: Towards a Synthetic Interpretation of the EU acquis’ (27 June 2017), available at SSRN: https://ssrn.com/abstract=2993154
On 17 November 2017, the Information Law & Policy Centre will be holding its 3rd Annual Workshop and Lecture, entitled Children and Digital Rights: Regulating Safeguards and Freedoms. See the Call for Papers.
Readers of the Information Law and Policy Centre blog may be interested in the following event held by Maastricht University.
The academic conference addresses the question of how surveillance is perceived from the perspective of the three main stakeholders involved in the process of surveillance: surveillance authorities, data subjects and companies. It brings together the perspectives of those stakeholders and provides informative insights from academics in both the EU and the US on how these issues interplay in different contexts.

Programme

9:30-10:00 Registration
10:00-10:30 Keynote speech: “The EU’s approach towards surveillance”, Philippe Renaudière, Data Protection Officer, European Commission
10:30-12:00 Panel I: The perspective of the authorities who exercise surveillance
12:00-13:30 Lunch
13:30-15:00 Panel II: The perspective of individuals subject to surveillance
15:00-15:30 Coffee break
15:30-17:00 Panel III: Means of surveillance
17:00-17:30 Closing remarks, Giovanni Buttarelli, EDPS
17:30-18:00 Wrap-up
18:00 Network Cocktail
Panel I: The perspective of the authorities who exercise surveillance
Surveillance authorities currently face several challenges, ranging from tackling the consequences of the recent Paris and Brussels terrorist attacks and the alleged lack of data sharing for security purposes (prevention, detection and investigation of serious crimes, including terrorism), to reconciling the migrant crisis and the challenges it brings with border protection concerns. On a broader scale, negotiations surrounding the Privacy Shield and the finalisation of the General Data Protection Regulation are on the agenda. Moreover, the enforcement of surveillance-related decisions or judgments is leading to the increasing constitutionalization of this field. Within the first perspective, the conference will address the following issues:
- Does enhanced surveillance always lead to enhanced security?
- Which other means, apart from surveillance, are available to foster security? In this context, does surveillance necessarily imply bulk data retention? Or could alternative approaches be pursued?
- Intelligence cooperation and data exchange between authorities within and outside the EU need a common understanding at EU level on: what is intelligence, who can have access to collected information, and for what purposes?
- What impact does the lack of EU competence on matters of national security have on intelligence sharing within or outside the EU?
- The growing demand for the reconciliation of security and surveillance with privacy: do new surveillance measures guarantee respect for both?
- The future of the Privacy Shield: enforcement and relevance for US businesses
- The risks and benefits of surveillance by private controllers (EU and US based)
- Blurring boundaries between surveillance and profiling techniques and the use of profiling in surveillance
- The constitutionalization effect on the field of privacy through surveillance enforcement
Chair and discussant: Francesca Galli, Maastricht University
- Xavier Tracol, Senior Legal Officer at Eurojust: “From prohibiting generalised mass surveillance to permitting targeted retention of both traffic and location data for the purpose of fighting serious crime”
- Christiane Hoehn, Principal Advisor to the EU Counter-Terrorism Coordinator: “The information sharing environment in counter-terrorism: Challenges and perspectives”
- Elif Erdemoglu, Lecturer at The Hague University of Applied Sciences / Researcher at Cybersecurity Center of Expertise, The Hague University of Applied Sciences: “The risks and benefits of surveillance by private controllers (EU and US based)”
- Elspeth Guild, Jean Monnet Professor ad personam at Queen Mary, University of London as well as at the Radboud University Nijmegen, Netherlands
Surveillance often encompasses the general public and is not targeted at particular individuals suspected of involvement in serious criminal activity, including the preparation of a terrorist attack. Within this perspective, the conference will discuss how surveillance should be better regulated in order to achieve its goals most efficiently, whether the expansion of surveillance measures is always beneficial to security, and what rights data subjects have with regard to surveillance. More precisely, the following topics will be addressed:
- The necessity of effective enforcement of data subjects’ rights with regard to surveillance
- How broad should surveillance be – applying only to suspects or to the general public?
- Do surveillance policies affect certain communities disproportionately, and how could this be addressed?
- What are the necessary limitations of surveillance and the relevant criteria in this regard?
- The notions of ‘public security’ and ‘national security’, left undefined in the EU Treaties and also problematic in national legal contexts
- The issue of consent in surveillance
Chair and discussant: Maja Brkan, Maastricht University
- Gloria Gonzalez Fuster, Research Professor at LSTS at Vrije Universiteit Brussel: “Who is the data subject: the surveillance perspective”
- Lorna Woods, Chair of Internet Law, School of Law at the University of Essex: “The Investigatory Powers Act: bulk powers under control?”
- Ike Kamphof, Assistant Professor at the Department of Philosophy at Maastricht University: “Securing Privacy. How Homecare Surveillance Shows the Need for a Civil Art next to Rules.”
The third issue addressed will be the means of surveillance and the interplay between, on the one hand, legal limitations and possibilities in this regard and, on the other hand, the constant technical development of innovative means of surveillance. Encryption, Privacy by Design and by Default, anonymization, and dealing with big and raw data have become part of a constant legal and political debate in Europe and around the world. The recent Apple dispute in the US epitomizes the importance of this debate. Therefore, this perspective will address the following issues:
- Legal regulation of technical means of surveillance: do the rigidity of the legal regime and the incapacity to quickly adapt to technical changes prevent more effective surveillance?
- The (un)necessary legal limitations of technical means of surveillance
- The policies of private controllers and the newest technical developments: in the absence of a comprehensive legal regime, are private controllers leading the way in regulation?
Chair and discussant: Sergio Carrera, CEPS and Vigjilenca Abazi, Maastricht University
- Rocco Bellanova, Post-doctoral researcher at University of Amsterdam (UVA): “Testing (Surveillance) Devices? Data protection instruments beyond compliance”
- Annika Richterich, Assistant Professor in Digital Culture, Literature and Art, Faculty of Arts and Social Sciences at Maastricht University: “Hacking Surveillance: Civic Tech Monitoring as (Data) Activism”
- Anna Dimitrova, Associate Professor in International affairs, Department of International Affairs at ESSCA School of Management, Paris: “Balancing National Security and Data Protection: The Role of EU and US Policy-makers and Courts before and after the NSA Affair”
- Federico Fabbrini, Full Professor of Law at the School of Law & Government of Dublin City University
The event is subsidised by the University Fund Limburg SWOL, the academic association for contemporary European Studies UACES and the Centre for European Policy Studies CEPS.
How has the law adapted to the emergence and proliferation of social media tools and digital technology? Furthermore, how successful has the law been in governing the challenges associated with an ongoing reformulation of our understandings of public and private spaces in the online environment?
These were the key questions discussed by a panel of experts at the Information Law and Policy Centre earlier this month. The event heralded the launch of a new book entitled the ‘Legal Challenges of Social Media’ edited by Dr David Mangan (City Law School) and Dr Lorna Gillies (University of Strathclyde). A number of the book’s authors provided insights into the contents of their individual chapters.
Social Media and Press Regulation
Professor Ian Walden began proceedings with a discussion of his chapter on press regulation. His chapter was informed by his own experience on the board of the Press Complaints Commission (PCC) between 2009 and 2014.
Walden started by addressing the question of what constitutes “press law”. Walden highlighted that for the most part journalists and editors are subject to the same law as most people – there is no special ‘public interest’ defence or journalistic exemption for hacking into the voicemail of a mobile phone user, for example. At the same time, journalists abide (to varying degrees) by an Editors’ Code which goes beyond the provisions of the law. In this context, the online environment and social media have rendered press regulation even more complex in a number of ways.
First, a converging media landscape has led to newspapers and magazines such as the Guardian, Financial Times, Elle and OK using video content which the online video regulator, the Association for Television on Demand (ATVOD), ruled was subject to the EU Audiovisual Media Services (AVMS) Directive in 2011. Second, Walden highlighted that social media has provoked questions about how journalists use ‘private’ information posted to sites like Facebook to source and report news stories. Third, the transnational nature of social media has significantly complicated issues around what constitutes ‘publication’ including in the provisions of (super)-injunctions and in cases concerning the ‘right to be forgotten’.
Ultimately, Walden questioned whether the current regulatory regime was sufficient in light of these challenges, but indicated that the prospect of the press submitting to a state regulator remained a distant prospect.
Reforming the Contempt of Court Act 1981
Another central plank of UK law for any working journalist is the Contempt of Court Act 1981. Where previously, however, reporting restrictions contained within the act were almost uniquely relevant to media organisations, social media has now made potential ‘accidental journalists’ of us all. In his chapter, Dr Daithí Mac Síthigh (Newcastle University) highlighted how the judiciary has wrestled with the interpretation of the Contempt of Court Act in light of the fact that jurors now have access both to all manner of information and the means of publication.
Mac Síthigh argued that the thinking of the former Attorney General, Dominic Grieve, on the issue has alternated. On the one hand, Grieve has indicated that a few individual Twitter users are not likely to have much influence; it is the mainstream media who must comply with the 1981 Act. On the other hand, Grieve has noted that while the mainstream media do comply – for the most part – with the 1981 Act, various other individuals and jurors publishing online are not respecting the law.
In the last few years, the Law Commission has undertaken significant work on Contempt of Court considering – among others – the question of whether a publisher is liable for prejudicial material posted online, where the material has been published entirely legitimately prior to legal proceedings becoming active. In 2015, a new criminal offence for jurors conducting prohibited research – a Law Commission recommendation – was implemented in section 71 of the Criminal Justice and Courts Act 2015.
Protecting Freedom of Expression Online: Article 10 and Article 8?
Moving away from the law as it relates directly to the media, Professor Lorna Woods (University of Essex) noted that current debate around the internet and social media use often focuses on state interference with the right to freedom of expression under Article 10 of the European Convention on Human Rights (ECHR). This is evidently an important issue, but she contended that perhaps we should also be looking at the state’s obligations to actively and positively protect freedom of expression. She suggested that although Article 10 does envisage some positive obligations on the part of the state, it is perhaps not always the best instrument for this purpose. Woods proposed that Article 8 ECHR’s notion of ‘respect for private life’ seems to suggest a “greater level of positive obligation” on behalf of the state than can be found in Article 10 ECHR. Article 8 ECHR is not just about privacy, she argued; it includes the spaces where we interact with society. In this regard, she suggested that rather than “obsessing” over Article 10 ECHR, recourse to Article 8 ECHR in cases concerning social networking sites ought to be a consideration in protecting our human rights. After all, social networking sites and internet-connected technologies are increasingly making more public those spaces which we previously deemed to be entirely private.
A Right to Post-mortem Privacy?
For Edina Harbinja (University of Hertfordshire), the question of privacy does not end when we die. Her remarks, based on her book chapter, considered whether a right of privacy beyond death is necessary in light of our everyday social networking habits. Her study of Facebook’s policies for deceased users identifies what she regards as several ‘contradictory’ options for Facebook profiles after death, including: memorialisation of the profile, which can be requested by family/next of kin; the submission of a request for deactivation or removal of a deceased user; and the relatively recent option (2015) of providing Facebook with a legacy contact who can administer the profile.
Harbinja highlighted that the EU’s General Data Protection Regulation – which the UK is intending to implement regardless of Brexit – does not comprehensively envisage protection of the data of the deceased, but does leave open the possibility of member states making their own provisions. She also suggested that the collection of personal data on platforms such as Facebook demanded legal reforms be considered in the areas of copyright and in legislation containing traditional understandings of property.
Social Media and The Defamation Act 2013
In closing the panel, Lorna Gillies (University of Strathclyde) returned to a theme addressed earlier on by Ian Walden, namely the complications arising from the transnational nature of social media. Gillies’ remarks concerned the Defamation Act 2013 and the UK’s impending exit from the European Union. Under section 1 of the 2013 Act, a claimant must demonstrate ‘serious harm’ to their reputation for it to be actionable in the courts. Moreover, serious harm must be demonstrated in the English court system raising questions as to how English law will interact with European law in the future – particularly in cases where the defendant in any defamation claim is based in another jurisdiction.
Playing ‘catch up’? Reforming the Law in a Social Media Age
Taken together, the panellists’ presentations highlighted several key themes. Since its emergence and normalisation, social media has challenged lawmakers and the legal profession by complicating definitional understandings of what we regard as ‘published’ ‘content’ and who we regard as ‘publishers’ or ‘media producers’. Social media has also blurred the boundaries of public and private spaces, while creating a globally connected world which traverses both national and supra-national jurisdictions. These trends have rendered some existing legal provisions inadequate, complicated others, and led to completely new legislation. The overriding impression given by the panel was the sense that for the foreseeable future the law will continue to play ‘catch-up’.
Daniel Bennett, Research Assistant, Information Law and Policy Centre
In this guest post, Professor William Webster outlines the objectives of the civil engagement strand of the National Surveillance Camera Strategy. He is the Director at the Centre for Research into Information, Surveillance and Privacy (CRISP), Professor of Public Policy and Management at the University of Stirling, and is leading the civil engagement strand. This post first appeared on the blog of the Surveillance Camera Commissioner, Tony Porter.
It’s often said that the UK is one of the most surveilled countries in the world, with some reports estimating over 6 million CCTV cameras in the UK – surveillance cameras are everywhere and have become a familiar sight on our streets and in shopping centres, schools, hospitals and airports. For me, one of the pressing issues is whether members of the public, when they see a surveillance camera, know or understand why it is there, who is operating it and what it does. In some cases I suspect these questions cannot be answered. The objective of the civil engagement strand is to make information freely available to the public about the operation of surveillance camera systems.
We know surveillance happens, and given that the current threat level is severe, I think most people expect CCTV surveillance to take place. Whilst surveillance is in use, organisations must put the individual at the core of what they do to ensure that they are kept safe – but this must happen without infringing their basic human rights contained within Article 8 of the European Convention on Human Rights or compromising their rights under national and European Data Protection rules.
Engaging citizens
In this work strand we want to engage citizens and civil society on the use of surveillance camera systems and associated technologies (such as automatic facial recognition). We want to build public awareness and encourage debate about surveillance and how it is conducted on our behalf. The intention here is that the better governance of surveillance cameras can only be realised through enhanced public debate about their role in society.
Technology is advancing quickly and we live in a world where body worn video, dash and head cams are increasingly commonplace. Drones are taking off and automatic facial recognition is no longer the stuff of science fiction. As technology advances, so does the potential to intrude into the lives of citizens – and as surveillance cameras are computerised and automated, it becomes even harder to know what each camera is doing. So, public trust and support for surveillance needs to be balanced with our needs and expectations for personal protection and privacy, and it is important that the levels and types of surveillance realised through CCTV are delivered in the public interest.
Civil engagement
Our civil engagement plan has now been published and it aims to ensure that:
- Citizens have free access to information relating to the operation of surveillance cameras,
- Citizens have a better understanding of their rights in relation to the operation of surveillance cameras,
- Citizens have an understanding of how surveillance cameras function and are used, and
- Organisations have an understanding of the information relating to the operation of surveillance cameras that they should make available to citizens.
I’d be interested to hear what people think about the plan so please let me know by commenting on the blog of the Surveillance Camera Commissioner.
Over the next 3 years we will be working to make sure that civil engagement happens across the strategy. We will also be encouraging organisations to talk to the people their surveillance cameras monitor, and to publish information about the systems they use, why they use them and what happens to the personal data they collect.
Look out for some of the events we will be holding as part of the strategy and make sure you sign up for email alerts for Tony’s blog and also follow him on Twitter.
Professor William Webster
Readers of the Information Law and Policy Centre blog may be interested in the following ECREA event.
The Future of Media Content:
Interventions and Industries in the Internet Era
15 – 16 September 2017
The “Communication Law and Policy” and “Media Industries and Cultural Production” Sections of the European Communications Research and Education Association (ECREA) invite you to their 2017 joint workshop on The Future of Content: Interventions and Industries in the Internet Era, hosted by the University of East Anglia’s School of Politics, Philosophy, Language and Communication Studies. This unique opportunity will bring together those investigating the processes of production and distribution with those studying the policy and regulation governing those processes.
The renowned Professor Eli Noam of Columbia University, New York, will deliver the keynote address. A keynote panel of industry and policy actors will additionally set the tone for a day and a half of research-based discussions on trends and challenges.
Media and communications industries have changed dramatically over the past decade and both businesses and policy makers are struggling to adapt. Legacy media companies engaged in cultural and news production are trying to change their business models in a manner that will allow them to survive in the face of increased competition for advertising income and the constraints of having a new breed of intermediaries between them and their audiences.
Policy makers are looking beyond the traditional investment in public service broadcasting and content quotas for new interventions and policy mechanisms that might encourage content production and distribution. One of the biggest challenges is defining the landscape of actors, markets and relationships in which content is created and disseminated – from the YouTube star making and reaching millions from a bedroom to the public service broadcaster (PSB) that is now managing big data for its online audience and negotiating with service providers for zero-rating carriage in order to reach its audiences with sufficient speed and stability.
This joint workshop will include panels and cutting edge paper presentations from a broad range of disciplines, interested in the policy, production and business of content and its carriage.
Location: Julian Study Centre, UEA
Friday 15 September
Registration from 10:00
11:00 – 12:30 YECREA session for professional development of young scholars
12:30 – 13:30 Lunch (own organization)
13:30 – 14:30 Keynote sponsored by UEA’s Centre for Competition Policy (CCP) Professor Eli Noam, Columbia University, NY
14:30 – 16:00 Keynote industry and policy stakeholder panel
16:00 – 16:15 Break
16:15 – 17:45 Panel: How media institutions are adapting to the increasingly non-linear, mobile environment
19:00 Conference Dinner at The Library, Norwich
Saturday 16 September
09:30 – 11:00 Panel: The processes and discourses of policy interventions in media
11:00 – 11:15 Break
11:15 – 12:15 Panel: The changing systems for funding quality content
12:15 – 13:15 Panel: The potential of fans in content production industries
13:15 – 14:15 Lunch (provided)
14:15 – 15:15 Panel: Algorithms and Platforms in media markets: new roles between content and consumers
15:15 – 16:15 Panel: Redefining journalism and the public in the new news media environment
16:15 – 17:00 Break
17:00 – 18:30 Panel: From regulating to “chilling”: the application of law to communications and cultural expression
18:30 Closing remarks
Cost: £50 for waged participants and £40 for non-waged participants and those from ECREA recognized “soft-currency” countries. Includes facilities, organisation, all coffee breaks and lunch on Saturday
Optional Conference Dinner: cost to be given upon registration
Please contact email@example.com with any queries
*This programme is designed with the expectation that all accepted papers will be presented, so it remains preliminary and subject to change until registration is complete. Panel length varies depending on the number of presenters included, and is intended to allow ample time for discussion.
Legal researchers might be interested in the following fellowship opportunity at UK Parliament…
The UK Parliament is currently piloting an academic fellowship scheme that offers academic researchers, from different subject areas and at every stage of their career, the opportunity to work on specific projects from inside Westminster’s walls.
We are now in the second phase of this scheme. This involves an ‘Open call’ which offers academics the opportunity to come and work in Parliament on a project of their own choosing, as long as they can demonstrate that it is relevant, and will contribute, to the work of Parliament.
One area of interest to Parliament is the impact of Parliament on legislation. We are interested in working with academics with knowledge and/or experience in identifying, tracking and assessing impact to help us to understand better, and identify empirically, the influence of MPs and Peers’ scrutiny on legislation.
As a bill passes through Parliament, MPs and Peers examine the proposals contained within it at both a general level (debating the general principles and themes of the bill) and a detailed level (examining the specific proposals put forward in the bill, line by line). More information about the different stages in the passing of a bill is provided on the parliamentary website. In so doing, MPs and Peers debate the key principles and main purposes of a bill and flag up any concerns or specific areas where they think amendments (changes) are needed.
We are interested in developing a series of case studies that examine how Peers’ scrutiny of legislation has shaped the focus, content or tone of legislation as it becomes an Act (given Royal Assent). This can include:
- Direct influence, for example an amendment tabled by a Peer is successful and is agreed to by the government and incorporated directly into the bill.
- Indirect influence, for instance when an amendment tabled by a Peer is not successful but the substance of it is subsequently introduced by the government itself (and when the role of the Peer that tabled it in the first instance is not acknowledged).
We envisage that the case studies will look at a government bill scrutinised by the House of Lords and trace the outcomes of amendments tabled and debated at each stage of the bill’s scrutiny.
The choice of bills to focus on will be decided in conjunction with the academic. This will require the Fellow to:
- understand the intentions of the amendments tabled
- understand how the amendment related to, and interacted with the bill as drafted
- produce an explanation of the outcome in each case
- draft a concise written account of the House’s impact on the bill.
The Scheme is open to academics (researchers with PhDs) employed at any of the 33 universities holding Impact Acceleration Award funding from either the Economic and Social Research Council (ESRC) or the Engineering and Physical Sciences Research Council (EPSRC). There are opportunities for flexible working including both part-time and remote working.
The deadline for submitting an expression of interest to the Scheme is midnight on 4th September 2017.
For more information about the Academic Fellowship Scheme see: http://www.parliament.uk/mps-lords-and-offices/offices/bicameral/post/fellowships/parliamentary-academic-fellowship-scheme/
Submissions to the Law Commission’s consultation on ‘Official Data Protection’: Guardian News and Media
The Law Commission has invited interested parties to write submissions commenting on the proposals outlined in a consultation report on ‘official data protection’. The consultation period closed for submissions on 3 May, although some organisations have been given an extended deadline. (For more detailed background on the Law Commission’s work please see the first post in this series).
The Information Law and Policy Centre is re-publishing some of the submissions written by stakeholders and interested parties in response to the Law Commission’s consultation report (pdf) to our blog. In due course, we will collate the submissions on a single resource page. If you have written a submission for the consultation you would like (re)-published please contact us.
Please note that none of the published submissions reflect the views of the Information Law and Policy Centre which aims to promote and facilitate cross-disciplinary law and policy research, in collaboration with a variety of national and international institutions.
The fourteenth submission in our series is the response submitted by Guardian News and Media. The executive summary outlines that Guardian News and Media is “very concerned that the effect of the measures set out in the consultation paper (‘CP’) would be to make it easier for the government to severely limit the reporting of public interest stories”.
(Previous submissions published in this series: Open Rights Group, CFOI and Article 19, The Courage Foundation, Liberty, Public Concern at Work, The Institute of Employment Rights, Transparency International UK, National Union of Journalists, and English Pen, Reporters Without Borders and Index on Censorship, the Open Government Network, Lorna Woods, Lawrence McNamara and Judith Townend, Global Witness, and the British Computer Society.)
Submissions to the Law Commission’s consultation on ‘Official Data Protection’: British Computer Society
The Law Commission has invited interested parties to write submissions commenting on the proposals outlined in a consultation report on ‘official data protection’. The consultation period closed for submissions on 3 May, although some organisations have been given an extended deadline. (For more detailed background on the Law Commission’s work please see the first post in this series).
The Information Law and Policy Centre is re-publishing some of the submissions written by stakeholders and interested parties in response to the Law Commission’s consultation report (pdf) to our blog. In due course, we will collate the submissions on a single resource page. If you have written a submission for the consultation you would like (re)-published please contact us.
Please note that none of the published submissions reflect the views of the Information Law and Policy Centre which aims to promote and facilitate cross-disciplinary law and policy research, in collaboration with a variety of national and international institutions.
The thirteenth submission in our series is the response submitted by the British Computer Society.
(Previous submissions published in this series: Open Rights Group, CFOI and Article 19, The Courage Foundation, Liberty, Public Concern at Work, The Institute of Employment Rights, Transparency International UK, National Union of Journalists, and English Pen, Reporters Without Borders and Index on Censorship, the Open Government Network, Lorna Woods, Lawrence McNamara and Judith Townend, and Global Witness.)
Creating International Frameworks for Data Protection: New Handbook on Data Protection in Humanitarian Action
In this guest post, Dr Christopher Kuner and Massimo Marelli highlight the publication of the new ICRC/Brussels Privacy Hub handbook on data protection in humanitarian action. Christopher Kuner is professor of law and co-director of the Brussels Privacy Hub, Vrije Universiteit Brussel (VUB) and Massimo Marelli is Head of Data Protection Office at the International Committee of the Red Cross (ICRC). This post first appeared on EJIL: Talk!, the blog of the European Journal of International Law.
The collection and processing of personally identifiable data is central to the work of both international organisations working in the humanitarian sector (IHOs) and non-governmental organisations (NGOs) in protecting and delivering essential aid to hundreds of millions of vulnerable individuals. With the increased adoption of new technologies in recent years, the increased complexity of data flows, and the growth in the number of stakeholders involved in the processing, there has been an increasing need for data protection guidelines that IHOs and NGOs can apply in their work. This was first highlighted in the 2013 report by Privacy International entitled “Aiding Surveillance”, and was also recognised by the International Conference of Privacy and Data Protection Commissioners in its Resolution on Privacy and International Humanitarian Action adopted in Amsterdam in 2015 (Amsterdam Resolution).
This need has led to publication of the new Handbook on Data Protection in Humanitarian Action prepared jointly by the Data Protection Office of the International Committee of the Red Cross (ICRC) and the Brussels Privacy Hub, a research institute of the Vrije Universiteit Brussel (VUB) in Brussels. It has been drafted in consultation with stakeholders from the global data protection and international humanitarian communities, including IHOs and humanitarian practitioners, data protection authorities, academics, NGOs, and experts on relevant topics. The drafting committee for the Handbook also included the Swiss Data Protection Authority; the Office of the European Data Protection Supervisor (EDPS); the French-speaking Association of Data Protection Authorities (AFAPDP); the UN High Commissioner for Refugees (UNHCR); the International Organisation for Migration (IOM); and the International Federation of Red Cross and Red Crescent Societies (IFRC).
Content of the Handbook
The Handbook addresses questions of common concern in the application of data protection in international humanitarian action, and is addressed to staff of IHOs and NGOs who are involved in the processing of personal data, particularly those in charge of advising on and applying data protection standards. It is hoped that it may also prove useful to other parties, such as data protection authorities, private companies, and others involved in international humanitarian action.
Compliance with personal data protection standards requires consideration of the specific scope and purpose of humanitarian activities to provide for the urgent and basic needs of vulnerable individuals. Both data protection and humanitarian action have the dignity of the individual at their core. The Handbook thus regards data protection and international humanitarian action as compatible, complementary to, and supporting each other.
The Handbook recognizes that the right to the protection of personal data is not an absolute right, and should be considered in relation to the overall objective of protecting human life and dignity, and be balanced with other fundamental rights and freedoms, in accordance with the principle of proportionality. For example, it may be necessary to balance, on the one hand, data protection rights with, on the other hand, the objective of ensuring access to and security of victims of armed conflict and other situations of violence. This requires high levels of confidentiality and in certain circumstances limitations on data access rights, as well as historical and humanitarian accountability of stakeholders in humanitarian emergencies, which implies in some cases long-term retention of data. It also recognises, however, that due to the extreme and volatile environment in which humanitarian action often takes place, diligent application of data protection principles is key, and that non-compliance with data protection can have more severe consequences than in non-emergency settings.
The first part of the Handbook deals with basic principles of data protection and their application in the context of humanitarian action. This includes issues such as the vulnerability of data subjects and its implications for the identification of suitable legal bases for data processing, as well as the difficulties involved in identifying clear-cut categories of sensitive data; emergencies and their implications for the data protection rights of individuals; data controllers’ responsibilities such as data security, impact assessments, and accountability; and international data sharing.
The second part deals with the use of specific new technologies in the context of international humanitarian action. This includes data analytics and ‘Big Data’; drones and remote sensing; biometrics; cash transfer programming; cloud services; and mobile messaging apps. The Handbook also addresses the use of data protection impact assessments, and includes specific examples to assist readers in applying data protection principles to data processing.
Creating international frameworks for data protection
Data protection is an area of human rights law which is of great and growing concern at an international level, but which lacks a firm, clear-cut legal basis in international law, beyond some important instruments such as the Council of Europe Convention 108.
As the UN General Assembly re-affirmed in its latest resolution on the right to privacy in the digital age issued on 19 December 2016, the right to privacy is protected in international human rights instruments such as the Universal Declaration of Human Rights and the International Covenant on Civil and Political Rights. However, as has been discussed previously in EJIL: Talk!, the current legal situation fails to present a clear international framework for data protection, which is closely related to privacy but not synonymous with it, as a separate right.
Just as was the case with the Data Protection Policy of the UN High Commissioner for Refugees (UNHCR) and the ICRC Rules on Personal Data Protection, both published in 2015, the Handbook was inspired by a wide variety of data protection instruments, without being based solely on any single one. These instruments include the 1990 United Nations General Assembly Guidelines for the Regulation of Computerized Personal Data Files adopted by GA Resolution 45/95; the OECD Guidelines (in their 2013 version); the 1981 Council of Europe Convention (Convention 108); and the 2009 Madrid Resolution. Other important instruments have also been taken into account, such as the 1995 EU Directive 95/46; the EU General Data Protection Regulation 2016/679 (GDPR); and the Amsterdam Resolution.
The Handbook’s starting point is the need recognised in the Amsterdam Resolution to provide data protection guidance in the humanitarian sector. It seeks to further the objective of the Amsterdam Resolution to meet the demand for co-operation in the development of guidance expressed by international humanitarian actors, taking into consideration the specificities of their actions and the need for these to be facilitated.
Privileges and immunities of International Organisations
As highlighted in the Amsterdam Resolution, the international community has entrusted specific tasks of a humanitarian nature to certain international organisations. The privileges and immunities they generally enjoy, notably immunity from jurisdiction in the countries where they work, ensure they can perform their mandate in full independence. Accordingly, they are responsible for processing data according to their own rules, in line with international standards, and subject to the control of and enforcement by their own compliance systems across their work. This is in line with the 1990 General Assembly Guidelines referred to above, which also apply to international organisations and require them to designate the authority statutorily competent to supervise observance of the guidelines.
This consideration is key, since in humanitarian emergencies, the privileges and immunities of an IHO may be the first line of protection for the personal data of vulnerable individuals, particularly in the context of armed conflicts and other situations of violence. This was highlighted also in the Amsterdam Resolution, which states at page 3 that “Humanitarian organizations not benefiting from Privileges and Immunities may come under pressure to provide data collected for humanitarian purposes to authorities wishing to use such data for other purposes (for example control of migration flows and the fight against terrorism). The risk of misuse of data may have a serious impact on data protection rights of displaced persons and can be a detriment to their safety, as well as to humanitarian action more generally”. The Handbook also considers the important implications for the analysis of flows of personal data within, from, and to international organisations.
The Handbook is a response to the growing calls for increased guidance in the application of data protection principles in the area of humanitarian action. Data protection is particularly important in ensuring the application of the humanitarian principle of “do no harm” to new technologies adopted in a humanitarian context. Drafted with input from both communities and taking into account legal frameworks from around the world, the Handbook represents an important step in demonstrating how data protection and humanitarian action can complement each other and mutually strengthen their objective to further the dignity of vulnerable individuals.
In this guest post, Professor Sonia Livingstone, (London School of Economics and Political Science), assesses the evidence behind claims in the media that the internet is harming children and young people. Her article is relevant to the Information Law and Policy Centre’s annual conference coming up in November – Children and Digital Rights: Regulating Freedoms and Safeguards.
The news is constantly awash with stories reporting on – and arguably amplifying – public anxieties over youth and media. The anxieties concern violence and video games, gaming addiction, internet use and mental health, and teen suicide.
For example, child psychologist Michael Carr-Gregg recently linked the sexualisation of children and their easy access to online pornography to an increase in sexual and indecent assault allegations at school.
His argument reprised some familiar problems common in media panic stories about the supposed loss of childhood innocence.
Problems with the evidence
There are four common steps that are neatly illustrated by Carr-Gregg’s argument: the claim of a media cause, an outcome harmful to youth, evidence that these are causally linked, and a mediating factor that can make or break the causal link.
- Children are increasingly immersed in pervasive and damaging messages from the media (online, social and mainstream) that objectify women and legitimate sexual assault. The existence of such messages is not in doubt. But children’s immersion in them and their implied lack of critical media literacy is.
- Sexual assaults among school students are increasing. The increased reporting of such assaults is also not in doubt. However, it’s unclear whether this is a genuine increase in assaults or an increase in their reporting due to greater awareness.
- Exposure to pornography is causally responsible for the increase in sexual assaults among children. This is often the crucial missing link in such media accounts; there is simply no evidence cited to support this claim.
- Parents (and society) are unaware of and should be better prepared for the pervasive influence of sexualised media on their children. Again this is likely exaggerated, although not greatly in doubt. But whether it makes a difference to children’s vulnerability to damaging messages or to actual assault has not been established.
But, for each step, the evidence for media harm is insufficient.
Research on children’s exposure to pornography
The conclusions of a recent detailed 20-year review of the research on children’s exposure to pornography were:
- Some adolescents – more often boys, “sensation seekers” and those with troubled family relations – tend to use pornography. This in turn is weakly linked to gender-stereotypical sexual beliefs that can be pejorative to women.
- There is a link between exposure to pornography and sexually aggressive behaviours in boys. But, for girls, pornography use is related to experiences of sexual victimisation.
- However, because of various “methodological and theoretical shortcomings”, the claim of causality cannot be considered conclusive.
These findings echo those from a recent meta-analysis, which found that sexting behaviour was positively related to sexual activity, unprotected sex and one’s number of sexual partners. However, the relationship was weak to moderate.
In general, research is clearer that online pornography can be problematic as an experience for adolescents rather than as a cause of sexually violent behaviour.
For instance, a 2016 UK study found that children report a range of negative emotions after watching pornography. On first exposure, children express shock, upset and confusion. They seem to become desensitised to the content over time.
Also complicating matters is the importance of allowing for adolescents’ right to express and explore their sexuality both online and offline, as well as the finding that one reason they seek out pornography is that society provides little else in the way of much-needed sexual education materials. But some have made a great start.
What, then, should be done?
The evidence in support of effective public interventions is as limited as evidence of the harm these are designed to alleviate.
- In a recent report, my colleagues and I proposed a series of possible legislative and industry strategies. Several have potential to reduce harm without unduly restricting either adults’ or children’s online freedoms.
- In another report, we focused on the importance of better digital literacy and sexual education in schools, as well as constructive awareness-raising and support for parents.
- In the 2017 report by the House of Lords, the focus was on improving the co-ordination of strategies across society, along with learning from the evaluation of what works and, more radically, introducing ethics-by-design into the processes of content and technological production to improve children’s online experiences in the first place.
But if a mix of thoughtful strategies is to be implemented, tested, refined and co-ordinated, we need an open environment in which policy is led by evidence rather than media panic. We must also become critical readers of popular claims about media harm.
In terms of identifying causes, we should ask why the finger of blame is always pointed at the media rather than other likely causes (including violence against women, or problems linked to growing inequality or precarity).
While such doubts have validity, it would also seem implausible to claim that the unprecedented advent of internet and social media use on a mass scale in Western cultures has had no consequences for children, positive or negative. The challenge is to ensure these consequences benefit children and the wider society.
The Information Law and Policy Centre is looking for an early career researcher to play a pivotal role in the development of the Information Law and Policy Centre (ILPC).
The post holder will work collaboratively with the Director of the Centre on projects focusing on the promotion and facilitation of research on Law and Policy, from identifying research opportunities to preparing applications for research grants. The successful candidate will also provide teaching support by mentoring or co-supervising MPhil/PhD students in Law and cognate areas.
To be considered for this role you will have completed your PhD (or have submitted when commencing this post) in a relevant discipline (e.g. law, sociology, political science) and have a keen research interest in legal and policy issues. You will be expected to demonstrate well-developed academic writing and editing skills, paired with excellent verbal communication skills to suit a range of audiences. It is essential that you have the ability to work independently as well as collaborate with colleagues across the Institute and University.
This role is 17.5 hours per week on a fixed term contract for 12 months in the first instance.
Please see the full job description on jobs.ac.uk for further details on the role and to apply.
The development of the Internet has facilitated global communications, new online spaces for the exchange of goods and information, new currencies and online marketplaces, and unprecedented access to information. These new possibilities in ‘cyberspace’ have been exploited for criminal activity and the rising challenge of various forms of ‘cybercrime’ in recent years has been well-documented.
As part of our cyber security and cybercrime seminar series at the Information Law and Policy Centre (ILPC) for 2017, lead speaker Dr Monique Mann explored the new challenges posed for policing and law enforcement by cybercrime and dissected the legal conundrums and human rights considerations raised by criminal activity which crosses international jurisdictions. The panel also included expert discussant Professor Ian Walden (Queen Mary University of London) and was chaired by the ILPC’s Director, Dr Nóra Ni Loideain.
Mann’s current research – alongside her colleagues at the Queensland University of Technology and Deakin University – concerns the ‘legal geographies of digital technologies’. Her talk considered three case studies which formed the basis of broader conclusions in relation to the use of extraterritorial legal powers by states (particularly the United States) and the issues raised by extradition processes which have become prominent in several high profile hacking cases.
The Silk Road
Mann’s talk began with an analysis of the FBI’s investigation into the Silk Road – an illicit online marketplace trading drugs and other illegal items operated through the anonymity afforded by the Tor network. Mann stated that the equivalent of $1.2 billion in the cryptocurrency, Bitcoin, was exchanged by Silk Road users during the site’s operation between 2011 and 2013. She highlighted that the FBI’s investigation and attempts to prosecute the leaders of the site were dependent on a range of extraterritorial legal activities.
First, warrants to investigate the online activities of the suspects were issued only after the FBI had already managed to access information from a server in Iceland. It is not clear from public documents how the FBI gained access to this server. Moreover, the warrants – which were also relevant to individual citizens based outside the United States – were granted on the authority of a single US judge.
Secondly, in order to demonstrate conspiracy under the Continuing Criminal Enterprise Act, the FBI sought to access communications between the chief suspect in the case, Ross William Ulbricht, and co-offenders based in Ireland and Australia. This included an attempt by the FBI to access email content from Microsoft servers based in Ireland using a Mutual Legal Assistance Treaty (MLAT) request. Microsoft fought the request and the most recent ruling on this issue has designated the request as an impermissible extraterritorial search.
Finally, the FBI sought to extradite Irish-based suspect, Gary Davis, to the United States in order to face trial for his involvement in the Silk Road site. Taken together, the FBI’s investigative techniques in relation to the Silk Road site raise significant questions around the processes and outcomes of extraterritorial legal activities.
Gary Davis’ case was the catalyst for the team to investigate extradition in greater detail, as it has become a central, if exceptional, feature of transnational justice cooperation. Mann and her colleagues have reviewed a number of high-profile cases of citizens facing extradition, including Davis, Gary McKinnon and Lauri Love. In the past, extradition was primarily used as a tool to return a suspected criminal to his or her home country after he or she had fled. In the digital age, however, extradition is increasingly being used in cybercrime cases to extradite suspects to a country they may never have visited, as the nature of transnational online offending means their crime effectively takes place in a different location from where they are physically based.
Courts have three options on being presented with an extradition request from another jurisdiction: accept the request and relocate the offender to face trial in the prosecuting country; deny the request altogether; or shift the prosecution to the ‘source of harm’ – i.e. the offender’s location.
Mann pointed out that in the cases of Gary McKinnon and Lauri Love, extradition requests from the United States have triggered protracted legal cases lasting many years, as the defendants have (variously) argued that extradition would infringe their rights under Articles 3, 6 and/or 8 of the European Convention on Human Rights. The cases have also hinged on the defendants’ physical and mental well-being, particularly in relation to Autistic Spectrum Disorders (emerging research suggests there is a link between online offending and ASDs).
The difficulties and legal complexities of these extradition cases, as well as a concern for the human rights of those involved, led the researchers to question whether it would not be better to shift the legal forum to the source – i.e. to the defendant’s home country.
Attendees at the ILPC seminar, however, highlighted that there are significant obstacles both in terms of cost and willingness to share evidence. It was argued, for example, that the UK was probably not willing to finance McKinnon’s trial here, nor would the US be interested in sharing sensitive information relating to the 73,000 US government computers – including NASA and military facilities – that McKinnon had hacked from his home computer.
Bulk Hacking and Child Exploitation Material
The final feature of extraterritorial law enforcement that Mann highlighted was the use of bulk hacking. These ‘watering hole’ or ‘honeypot’ operations have involved the FBI taking over an illegal website, moving it to a government server, continuing to operate the site, and then using it as a base to hack unsuspecting users.
In the Playpen example which Mann cited, the US government infected more than 8,000 computers in over 120 countries under a single warrant, making it the largest known extraterritorial hacking operation. The investigation into Playpen – a site for the exchange of child exploitation material – has sparked 124 cases involving 17 defendants. One of the central legal questions here has been whether such activities constitute a “search” of the site’s users or whether they constitute online tracking.
Defendants have also attempted to argue that the US government engaged in outrageous conduct by continuing to operate the Playpen website, pointing out that during its two weeks of operation the US government would have distributed 22,000 images of child exploitation material. Although the court in the case held that the US government did not create the crimes committed, Mann nevertheless raised the question of whether the ends justify these means.
Implications and Issues
For Mann, the Silk Road, extradition and bulk hacking case studies focus attention on the role of the United States in the transnational jurisdictional sphere. How far has policing in the context of cybercrime become ‘Americanised’ and at the behest of US agendas (such as the war on drugs)? And what does US law enforcement activity mean for understandings of ‘ownership’ of the internet?
Addressing these points, the panel’s discussant, Professor Ian Walden, a leading expert in information and communications law, stated that the United States’ access to investigative and legal resources will continue to mean it is ‘an important player’ in the prosecution of transnational cybercrime. He also argued that greater efforts at resolving legal conflict and a focus on international cooperation will be required as crime increasingly traverses international boundaries and as the jurisdictional claims of countries concurrently expand.
Walden was hopeful that international cooperation could be improved through international aid to raise standards of criminal and procedural law, and he acknowledged that for particularly serious cybercrime offences, such as those involving child exploitation material, there is some harmonisation.
He was not convinced, however, that in the near future there would be any advance in international agreements on cooperation beyond the Council of Europe’s 2001 Convention on Cybercrime. Differing national agendas and legal standards, he said, also create difficulties for international cooperation and legal harmonisation. Walden noted that Kenyan parliamentarians, for example, regard the main ‘cybercrime’ issue as the use of Facebook to accuse them of corruption – an issue which is of little concern in other parts of the world; while in Nigeria, cybercrimes can lead to the death penalty – a sanction that would be unacceptable to many other legal jurisdictions and not a solid foundation for cooperation.
In conclusion, the panel observed that British and European law has so far blocked the extraditions of Gary McKinnon and Lauri Love to the United States in the ‘interests of justice’. As a consequence of these and similar obstacles to transnational cooperation, it is likely that jurisdictional clashes in transnational cybercrime cases will become more commonplace – particularly if the scope for cybercrime increases with the ongoing spread of the internet and new communication technologies.
And perhaps, paradoxically, it might be the case that out of these clashes, new methods, techniques and agreements on transnational policing and law enforcement will have to emerge.
Daniel Bennett, Research Assistant, Information Law and Policy Centre