Information Law & Policy Centre Blog
19 Feb 2018, 17:30 to 19 Feb 2018, 19:30
Institute of Advanced Legal Studies, 17 Russell Square, London WC1B 5DR
Personal Data as an Asset: Design and Incentive Alignments in a Personal Data Economy

Description of Presentation: Despite the World Economic Forum's 2011 report on personal data becoming an asset class, the cost of transacting on personal data is becoming increasingly high, with regulatory risks, societal disapproval, legal complexity and privacy concerns. Professor Irene Ng contends that this is because personal data as an asset is currently controlled by organisations. As a co-produced asset, the person has not had the technological capability to control and process his or her own data or, indeed, data in general. Hence, legal and economic structures have been created only around organisation-controlled personal data (OPD). This presentation will argue that person-controlled personal data (PPD), technologically, legally and economically architected such that the individual owns a personal micro-server and therefore has full rights to the data within, much like owning a PC or a smartphone, is potentially a route to reducing transaction costs and innovating in the personal data economy. I will present the design and incentive alignments of stakeholders on the HAT hub-of-all-things platform (https://hubofallthings.com).
Professor Irene Ng, University of Warwick
Professor Irene Ng is the Director of the International Institute for Product and Service Innovation and the Professor of Marketing and Service Systems at WMG, University of Warwick. She is also the Chairman of the Hub-of-all-Things (HAT) Foundation Group (http://hubofallthings.com). A market design economist, Professor Ng is an advisor to large organisations, startups and governments on design of markets, economic and business models in the digital economy. Personal website http://ireneng.com
Dr Nora Ni Loideain, Director and Lecturer in Law, Information Law & Policy Centre, IALS
Wine reception to follow.
Artificial intelligence can already predict the future. Police forces are using it to map when and where crime is likely to occur. Doctors can use it to predict when a patient is most likely to have a heart attack or stroke. Researchers are even trying to give AI imagination so it can plan for unexpected consequences.
Many decisions in our lives require a good forecast, and AI agents are almost always better at forecasting than their human counterparts. Yet for all these technological advances, we still seem to deeply lack confidence in AI predictions. Recent cases show that people don’t like relying on AI and prefer to trust human experts, even if these experts are wrong.
If we want AI to really benefit people, we need to find a way to get people to trust it. To do that, we need to understand why people are so reluctant to trust AI in the first place.

Should you trust Dr. Robot?
IBM’s attempt to promote its supercomputer programme to cancer doctors (Watson for Oncology) was a PR disaster. The AI promised to deliver top-quality recommendations on the treatment of 12 cancers that accounted for 80% of the world’s cases. As of today, over 14,000 patients worldwide have received advice based on its calculations.
But when doctors first interacted with Watson they found themselves in a rather difficult situation. On the one hand, if Watson provided guidance about a treatment that coincided with their own opinions, physicians did not see much value in Watson’s recommendations. The supercomputer was simply telling them what they already knew, and these recommendations did not change the actual treatment. This may have given doctors some peace of mind, providing them with more confidence in their own decisions. But IBM has yet to provide evidence that Watson actually improves cancer survival rates.
On the other hand, if Watson generated a recommendation that contradicted the experts’ opinion, doctors would typically conclude that Watson wasn’t competent. And the machine wouldn’t be able to explain why its treatment was plausible because its machine learning algorithms were simply too complex to be fully understood by humans. Consequently, this has caused even more mistrust and disbelief, leading many doctors to ignore the seemingly outlandish AI recommendations and stick to their own expertise.
As a result, IBM Watson’s premier medical partner, the MD Anderson Cancer Center, recently announced it was dropping the programme. Similarly, a Danish hospital reportedly abandoned the AI programme after discovering that its cancer doctors disagreed with Watson in over two thirds of cases.
The problem with Watson for Oncology was that doctors simply didn’t trust it. Human trust is often based on our understanding of how other people think and having experience of their reliability. This helps create a psychological feeling of safety. AI, on the other hand, is still fairly new and unfamiliar to most people. It makes decisions using a complex system of analysis to identify potentially hidden patterns and weak signals from large amounts of data.
Even if it can be technically explained (and that’s not always the case), AI’s decision-making process is usually too difficult for most people to understand. And interacting with something we don’t understand can cause anxiety and make us feel like we’re losing control. Many people are also simply not familiar with many instances of AI actually working, because it often happens in the background.
Instead, they are acutely aware of instances where AI goes wrong: a Google algorithm that classifies people of colour as gorillas; a Microsoft chatbot that decides to become a white supremacist in less than a day; a Tesla car operating in autopilot mode that resulted in a fatal accident. These unfortunate examples have received a disproportionate amount of media attention, emphasising the message that we cannot rely on technology. Machine learning is not foolproof, in part because the humans who design it aren’t.

A new AI divide in society?
Feelings about AI also run deep. My colleagues and I recently ran an experiment where we asked people from a range of backgrounds to watch various sci-fi films about AI and then asked them questions about automation in everyday life. We found that, regardless of whether the film they watched depicted AI in a positive or negative light, simply watching a cinematic vision of our technological future polarised the participants’ attitudes. Optimists became more extreme in their enthusiasm for AI and sceptics became even more guarded.
This suggests people use relevant evidence about AI in a biased manner to support their existing attitudes, a deep-rooted human tendency known as confirmation bias. As AI is reported and represented more and more in the media, it could contribute to a deeply divided society, split between those who benefit from AI and those who reject it. More pertinently, refusing to accept the advantages offered by AI could place a large group of people at a serious disadvantage.

Three ways out of the AI trust crisis
Fortunately we already have some ideas about how to improve trust in AI. Simply having previous experience with AI can significantly improve people’s attitudes towards the technology, as we found in our study. Similar evidence also suggests the more you use other technologies such as the internet, the more you trust them.
Another solution may be to open the “black-box” of machine learning algorithms and be more transparent about how they work. Companies such as Google, Airbnb and Twitter already release transparency reports about government requests and surveillance disclosures. A similar practice for AI systems could help people gain a better understanding of how algorithmic decisions are made.
Research suggests involving people more in the AI decision-making process could also improve trust and allow the AI to learn from human experience. For example, one study showed that people who were given the freedom to slightly modify an algorithm felt more satisfied with its decisions, more likely to believe it was superior and more likely to use it in the future.
We don’t need to understand the intricate inner workings of AI systems, but if people are given at least a bit of information about and control over how they are implemented, they will be more open to accepting AI into their lives.
In this guest post, Marion Oswald offers her homage to Yes Minister and, in that tradition, smuggles in some pertinent observations on AI fears. This post first appeared on the SCL website’s Blog as part of Laurence Eastham’s Predictions 2018 series. It is also appearing in Computers & Law, December/January issue.
Humphrey, I want to do something about predictions.
Yes Humphrey, the machines are taking over.
Are they Minister?
Yes Humphrey, my advisers tell me I should be up in arms. Machines – ‘AI’ they call it – predicting what I’m going to buy, when I’m going to die, even if I’ll commit a crime.
Surely not, Minister.
Not me personally, of course, Humphrey – other people. And then there’s this scandal over Cambridge Analytica and voter profiling. Has no-one heard of the secret ballot?
Everyone knows which way you would vote, Minister.
Yes, yes, not me personally, of course, Humphrey – other people. Anyway, I want to do something about it.
Of course, Minister. Let me see – you want to ban voter and customer profiling, crime risk assessment and predictions of one’s demise, so that would mean no more targeted advertising, political campaigning, predictive policing, early parole releases, life insurance policies…
Well, let’s not be too hasty Humphrey. I didn’t say anything about banning things.
My sincere apologies Minister, I had understood you wanted to do something.
Yes, Humphrey, about the machines, the AI. People don’t like the idea of some faceless computer snooping into their lives and making predictions about them.
But it’s alright if a human does it.
Yes…well no…I don’t know. What do you suggest Humphrey?
As I see it Minister, you have two problems.
The people are the ones with the votes, the AI developers are the ones with the money and the important clients – insurance companies, social media giants, dare I say it, even political parties…
Yes, yes, I see. I mustn’t alienate the money. But I must be seen to be doing something Humphrey.
I have two suggestions Minister. First, everything must be ‘transparent’. Organisations using AI must say how their technology works and what data it uses. Information, information everywhere…
I like it Humphrey. Power to the people and all that. And if they’ve had the information, they can’t complain, eh. And the second thing?
A Commission, Minister, or a Committee, with eminent members, debating, assessing, scrutinising, evaluating, appraising…
And what is this Commission to do?
It will scrutinise, Minister, it will evaluate, appraise and assess, and then, in two or three years, it will report.
But what will it say Humphrey?
I cannot possibly predict what the Commission on Predictions would say, being a mere humble servant of the Crown.
But if I had to guess, I think it highly likely that it will say that context reigns supreme – there are good predictions and there are bad predictions, and there is good AI and there is bad AI.
So after three years of talking, all it will say is that ‘it depends’.
In homage to ‘Yes Minister’ by Antony Jay and Jonathan Lynn
Marion Oswald, Senior Fellow in Law, Head of the Centre for Information Rights, University of Winchester
The Fifth Interdisciplinary Winchester Conference on Trust, Risk, Information and the Law will be held on Wednesday 25 April 2018 at the Holiday Inn, Winchester UK. Our overall theme for this conference will be: Public Law, Politics and the Constitution: A new battleground between the Law and Technology? The call for papers and booking information can be found at https://journals.winchesteruniversitypress.org/index.php/jirpp/pages/view/TRIL
In this guest post, Yijun Yu, Senior Lecturer, Department of Computing and Communications, The Open University examines the world’s top websites and their routine tracking of a user’s every keystroke, mouse movement and input into a web form – even if it’s later deleted.
Hundreds of the world’s top websites routinely track a user’s every keystroke, mouse movement and input into a web form – even before it’s submitted or later abandoned, according to the results of a study from researchers at Princeton University.
And there’s a nasty side-effect: personal identifiable data, such as medical information, passwords and credit card details, could be revealed when users surf the web – without them knowing that companies are monitoring their browsing behaviour. It’s a situation that should alarm anyone who cares about their privacy.
The Princeton researchers found it was difficult to redact personally identifiable information from browsing behaviour records – even, in some instances, when users have switched on privacy settings such as Do Not Track.
The research found that third party tracking services are used by hundreds of businesses to monitor how users navigate their websites. This is proving to be increasingly challenging as more and more companies beef-up security and shift their sites over to encrypted HTTPS pages.
To work around this, session-replay scripts are deployed to monitor user interface behaviour on websites as a sequence of time-stamped events, such as keyboard and mouse movements. Each of these events records additional parameters – indicating the keystrokes (for keyboard events) and screen coordinates (for mouse movement events) – at the time of interaction. When associated with the content of a website and web address, this recorded sequence of events can be exactly replayed by another browser that triggers the functions defined by the website.
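The record-then-replay mechanism described above can be sketched roughly as follows. This is a minimal illustration, not any vendor's actual script: all names are hypothetical, and a real session-replay script would hook the browser's own event listeners rather than accept event dictionaries directly.

```python
# Sketch of a session-replay recorder (hypothetical names throughout).
# Events are stored as a sequence of time-stamped records that can later
# be replayed, in order, by other software.

from dataclasses import dataclass, field


@dataclass
class SessionRecorder:
    events: list = field(default_factory=list)

    # In a real script this would be wired to the browser's keydown and
    # mousemove listeners; here we simply accept event parameters.
    def record(self, kind: str, params: dict, timestamp_ms: int) -> None:
        self.events.append({"kind": kind, "params": params, "ts": timestamp_ms})

    # Replay hands each event, in time order, to a callback -- on the
    # analytics side this can drive another browser over the same page.
    def replay(self, apply) -> None:
        for event in sorted(self.events, key=lambda e: e["ts"]):
            apply(event)


rec = SessionRecorder()
rec.record("keydown", {"key": "p"}, 1001)    # keystrokes are captured even
rec.record("keydown", {"key": "w"}, 1050)    # before the form is submitted
rec.record("mousemove", {"x": 120, "y": 40}, 1100)

# Reassemble just the typed characters from the recorded stream.
typed = []
rec.replay(lambda e: typed.append(e["params"]["key"])
           if e["kind"] == "keydown" else None)
print("".join(typed))  # prints "pw"
```

Note that the keystrokes are reconstructible from the event log alone, whether or not the form was ever submitted – which is precisely the privacy concern the Princeton study raises.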
What this means is that a third person is able to see, for example, a user entering a password into an online form – a clear privacy breach. Websites employ third party analytics firms to record and replay such behaviour, they argue, in the name of “enhancing user experience”: the more they know about what their users are after, the easier it is to provide them with targeted information.
While it’s not news that companies are monitoring our behaviour as we surf the web, the fact that scripts are quietly being deployed to record individual browser sessions in this way has concerned the study’s co-author, Steven Englehardt, who is a PhD candidate at Princeton.

A website user replay demo in action.
“Collection of page content by third-party replay scripts may cause sensitive information, such as medical conditions, credit card details, and other personal information displayed on a page, to leak to the third-party as part of the recording,” he wrote. “This may expose users to identity theft, online scams and other unwanted behaviour. The same is true for the collection of user inputs during checkout and registration processes.”
Websites logging keystrokes is an issue that has been known to cybersecurity experts for a while. And Princeton’s empirical study raises valid concerns about users having little or no control over their surfing behaviour being recorded in this way.
So it’s important to help users control how their information is shared online. But there are increasing signs of usability trumping security measures that are designed to keep our data safe online.

Usability vs security
Password managers are used by millions of people to help them easily keep a record of different passwords for different sites. The user of such a service only needs to memorise one key password.
Recently, a group of researchers at the University of Derby and the Open University discovered that the offline clients of password manager services risked exposing the main key password: it was stored as plain text in memory, where it could be sniffed or dumped by whole-system attacks.
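Why a plain-text master password in memory matters can be illustrated with a small sketch. The names and the simulated "memory dump" below are entirely hypothetical; the point is only that a naive sweep for printable byte runs – the kind of scan memory-forensics tools perform on a process dump – is enough to recover an unencrypted secret.

```python
# Illustrative sketch (hypothetical data): recovering a secret that sits
# as plain text inside a process memory dump.

master_password = "correct horse battery staple"

# Pretend this byte buffer is a dump of the client's heap: the password
# sits amid unrelated bytes, unencrypted.
memory_dump = b"\x00garbage\x00" + master_password.encode() + b"\x00more\x00"


def scan_for_printable_runs(dump: bytes, min_len: int = 8) -> list:
    """Return printable ASCII runs of at least min_len bytes -- the kind
    of naive sweep an attacker's memory-forensics tool performs."""
    runs, current = [], bytearray()
    for byte in dump:
        if 32 <= byte < 127:          # printable ASCII range
            current.append(byte)
        else:
            if len(current) >= min_len:
                runs.append(current.decode())
            current = bytearray()
    if len(current) >= min_len:
        runs.append(current.decode())
    return runs


print(master_password in scan_for_printable_runs(memory_dump))  # prints True
```

Had the client held the key only in a derived or encrypted form, the same scan would surface nothing usable – which is why keeping secrets as plain text in memory is treated as a flaw rather than a mere implementation detail.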
User experience is not an excuse for tolerating security flaws.
The Information Law and Policy Centre’s Annual Conference 2017 – Children and Digital Rights: Regulating Freedoms and Safeguards
In this guest post Lorna Woods, Professor of Internet Law at the University of Essex, provides an analysis on the new ECJ opinion . This post first appeared on the blog of Steve Peers, Professor of EU, Human Rights and World Trade Law at the University of Essex.
Who is responsible for data protection law compliance on Facebook fan sites? That issue is analysed in a recent opinion of an ECJ Advocate-General, in the case of Wirtschaftsakademie (full title: Unabhängiges Landeszentrum für Datenschutz Schleswig-Holstein v Wirtschaftsakademie Schleswig-Holstein GmbH, in the presence of Facebook Ireland Ltd, Vertreter des Bundesinteresses beim Bundesverwaltungsgericht).
This case is one more in a line of cases dealing specifically with the jurisdiction of national data protection supervisory authorities, a line of reasoning which seems to operate separately from the Brussels I Recast Regulation, which concerns jurisdiction of courts over civil and commercial disputes. While this is an Advocate-General’s opinion, and therefore not binding on the Court, if followed by the Court it would consolidate the Court’s prior broad interpretation of the Data Protection Directive. While this might be the headline, it is worth considering a perhaps overlooked element of the data economy: the role of the content provider in supplying the individuals whose data is harvested.
Wirtschaftsakademie set up a ‘fan page’ on Facebook. The data protection authority in Schleswig-Holstein sought the deactivation of the fan page on the basis that visitors to the fan page were not warned that their personal data would be collected by means of cookies placed on the visitor’s hard disk. The purpose of that data collection was twofold: to compile viewing statistics for the administrator of the fan page; and to enable Facebook to target advertisements at each visitor by tracking the visitors’ web browsing habits, otherwise known as behavioural advertising. Such activity must comply with the Data Protection Directive (DPD) (as implemented in the various Member States). While the content attracting visitors was that of Wirtschaftsakademie, it relied on Facebook for data collection and analysis. It is here that a number of preliminary questions arise:
- Who is the controller for the purposes of the data protection regime;
- Which is the applicable national law; and
- What is the scope of the national supervisory authority’s regulatory competence?
The referring court had assumed that Wirtschaftsakademie was not a controller as it had no influence, in law or in fact, over the manner in which the personal data was processed by Facebook, and the fact that Wirtschaftsakademie had recourse to analytical tools for its own purposes does not change this [para 28]. Advocate General Bot, however, disagreed with this assessment, arguing that Wirtschaftsakademie was a joint controller for the purposes of the DPD – a possibility for which Article 2(d) DPD makes explicit provision [paras 42, 51, 52]. The Advocate General accepted that the system was designed by Facebook so as to facilitate a data-driven business model and that Wirtschaftsakademie was principally a user of the social network [para 53]. He highlighted, however, that without the participation of Wirtschaftsakademie the data processing in respect of visitors to its fan page could not occur, and that Wirtschaftsakademie could end that processing by closing the relevant fan page down. In sum:
Inasmuch as he agrees to the means and purposes of the processing of personal data, as predefined by Facebook, a fan page administrator must be regarded as having participated in the determination of those means and purposes. [para 56]
Advocate General Bot further suggested that the use of the various filters included in the analytical tools provided meant that the user had a direct impact on how data was processed by Facebook. To similar effect, a user can also seek to reach specific audiences, as defined by the user. As a result, the user has a controlling role in the acquisition phase of data processing by Facebook. The Advocate General rejected a formal analysis based on the terms of the contract concluded by the user and Facebook [para 60]: the fact that the user may be presented with ‘take it or leave it’ terms does not affect the fact that the user may be a controller.
As a final point, the Advocate General referred to the risk of data protection rules being circumvented, arguing that:
had the Wirtschaftsakademie created a website elsewhere than on Facebook and implemented a tool similar to ‘Facebook Insights’ in order to compile viewing statistics, it would be regarded as the controller of the processing needed to compile those statistics [para 65].
A similar approach should be taken in relation to social media plug ins (such as Facebook’s like button), which allow Facebook to gather data on third party websites without the end-user’s consent (see Case C-40/17 Fashion ID, pending).
Having recognised that joint responsibility was an important factor in ensuring the protection of rights, the Advocate General – referring to the approach of the Article 29 Working Party on data protection – clarified that this did not mean that both parties would have equal responsibility, but rather their respective responsibility would vary depending on their involvement at the various stages of processing activities.
Facebook is established outside the EU, but it has a number of EU established subsidiaries: the subsidiary which has responsibility for data protection is established in Ireland, while the other subsidiaries have responsibility for the sale of advertising. This raises a number of questions: can the German supervisory authority exercise its powers and if so, against which subsidiary?
Applicable law is dealt with in Article 4 DPD, which refers to the competence of the Member State where the controller is established but which also envisages the possibility, in the case of a non-EU parent company, of multiple establishments. The issue comes down to the interpretation of the phrase from Art. 4(1)(a), ‘in the context of the activities of an establishment’, which according to Weltimmo cannot be interpreted restrictively [para 87]. The Advocate General determined that there were two criteria [para 88]:
- An establishment within the relevant Member State; and
- Processing in connection with that establishment.
Relying on Weltimmo and Verein für Konsumenteninformation, the Advocate General identified factors based on the general freedom-of-establishment approach to the question of establishment, looking for real activity through stable arrangements; the approach is not formalistic. Facebook Germany clearly satisfies these tests.
Referring to Article 29 Working Party Opinion 8/2010, the Advocate General re-iterated that in relation to the second criterion, it is context not location that is important. In Google Spain, the Court of Justice linked the selling of advertising (in Spain) to the processing of data (in the US) to hold that the processing was carried out in the context of the Spanish subsidiary given the economic nexus between the processing and the advertising revenue. The business set up for Facebook here is the same, and the fact that there is an Irish office does not change the fact that the data processing takes place in the context of the German subsidiary. The DPD does not introduce a one-stop shop; to the contrary, a deliberate choice was made to allow the application of multiple national legal systems (see Rec 19 DPD), and this approach is supported by the judgment in Verein für Konsumenteninformation in relation to Amazon. The system will change with the entry into force of the General Data Protection Regulation (GDPR), but the Advocate General proposed that the Court should not pre-empt the entry into force of that legislation (due May 2018) in its interpretation, as the cooperation mechanism on which it depends is not yet in place [para 103].
By contrast to Weltimmo, where the supervisory authority was seeking to impose a fine on a company established in another Member State, here the supervisory authority would be imposing German law on a German company. There is a question, however, as to the addressee of any enforcement measure. On one interpretation, the German regulator should have the power only to direct compliance on the company established on its territory, even though that might not be effective. Alternatively, the DPD could be interpreted so as to allow the German regulator to direct compliance from Facebook Ireland. Looking at the fundamental role of controllers, Advocate General Bot suggested that this was the preferred solution. Article 28(1), (3) and (6) DPD entitle the supervisory authority of the Member State in which the establishment of the controller is located, by contrast to the position in Weltimmo, to exercise its powers of intervention without being required first to call on the supervisory authority of the Member State in which the controller is located to exercise its powers.
The novelty in this Opinion relates to the first question, and it is significant because the business model espoused by social media companies depends on the participation of those providing content, who seem at the moment to take little responsibility for their actions. The price paid by third parties (in terms of data) is facilitated by these content providers, allowing them to avoid or minimise their business costs. Should there be consistent enforcement against such users, this may gradually have an effect on the underlying platform’s business model. While it is harder to regulate mice than elephants, at least these mice appear to be clearly within the geographic jurisdiction of the German regulator – and will remain so even when the GDPR is in force.
The Advocate General went out of his way to explain that there was no difference between the situation in issue here and that in the other relevant pending case, Case C-40/17 Fashion ID. This case concerns the choice by a website provider to embed third party code allowing the collection of data in respect of visitors in the programming for the website for its own ends (increased visibility of and thus traffic to the website): the code in question is that underpinning the Facebook ‘like’ button, but would also presumably include similar codes from Twitter or Instagram.
If there was any doubt from cases – for example Weltimmo – about whether there is a one-stop shop (ie only one possible supervisory authority with jurisdiction across the EU) in the Data Protection Directive, the Advocate General expressly refutes this point. In this context, it seems that this case adds little new, rather elaborating points of detail based on the precise factual set-up of Facebook operations in the EU. It seems well-established now that – at least under the DPD – clever multinational corporate structures cannot funnel data protection compliance through a chosen national regime.
It may be worth noting also the broad approach of the Advocate General to Google Spain when determining whether processing is in the context of activities. There the Court observed that:
‘in such circumstances, the activities of the operator of the search engine and those of its establishment situated in the Member State concerned are inextricably linked since the activities relating to the advertising space constitute the means of rendering the search engine at issue economically profitable and that engine is, at the same time, the means enabling those activities to be performed’ [Google Spain, para 56]
Here, the Advocate General focussed on the fact that social networks such as Facebook generate much of their revenue from advertisements posted on the web pages set up and accessed by users and that there is therefore an indissoluble link between the two activities. Thus it seems that the Google Spain reasoning applies broadly to many free services paid for by user data, even if third parties – for example those providing the content on the page visited – are involved too.
Of course, the GDPR does introduce a one-stop shop. Arguably, therefore, these cases are of soon-to-be-historic interest only. The GDPR proposes that the regulator in respect of the controller’s main EU establishment should have lead responsibility for regulation, with regulators in respect of other Member States being ‘concerned authorities’. There are two points to note: first, there is a system in place to facilitate the cooperation of the relevant supervisory authorities (Art 60), including possible recourse to a ‘consistency mechanism’ (Art 63 et seq); secondly, the competence of the lead authority to act in relation to cross-border processing in Article 66 operates without prejudice to the competence of each national supervisory authority in its own territory set out in Article 55. The first of these points reflects the attempt to limit regulatory arbitrage and a downward spiral of standards under the GDPR, given the broad approach to establishment; the interest of the recipient state in regulating means that there may be many cases involving ‘concerned authorities’. The precise implications of the second point are not clear; note, however, that the one-stop shop as regards Facebook would not stop data protection authorities taking enforcement action against users such as Wirtschaftsakademie.
In this guest post, Faith Gordon, University of Westminster explores how, under UK law, a child’s anonymity is not entirely guaranteed. Faith is speaking at the Information Law and Policy Centre’s annual conference – Children and Digital Rights: Regulating Freedoms and Safeguards this Friday, 17 November.
Under the 1948 Universal Declaration of Human Rights, each individual is presumed innocent until proven guilty. A big part of protecting this principle is guaranteeing that public opinion is not biased against someone that is about to be tried in the courts. In this situation, minors are particularly vulnerable and need all the protection that can be legally offered. So when you read stories about cases involving children, it’s often accompanied with the line that the accused cannot be named for legal reasons.
However, a loophole exists: a minor can be named before being formally charged. And as we all know in this digital age, being named comes with consequences – details or images shared of the child are permanent. While the right to be forgotten is the strongest for children within the Data Protection Bill, children and young people know that when their images and posts are screenshot they have little or no control over how they are used and who has access to them.
Should a child or young person come into conflict with the law, Section 44 of the Youth Justice and Criminal Evidence Act 1999 could offer pre-charge protection for them as minors, but it has never been enacted.
The latest consideration of this issue was during debates in the House of Lords in July 2014 and October 2014. It was decided that the aims of Section 44 could be achieved by protections from media regulatory bodies. But given that, in reality, regulatory bodies and their codes of practice don’t adequately offer protection to minors pre-charge, the government’s failure to enact this section of the law is arguably contrary to Article 8 of the European Convention on Human Rights, which is the right to respect for private and family life.

Once you’re named …
This failure is now exposing a 15-year-old child. Private details about him were published in print media, online and by other individuals on social media, after he was questioned by the Police Service of Northern Ireland in respect of the alleged TalkTalk hacking incident.
This alleged hacking has been described as one of the largest and most public contemporary incidents of cybercrime in the UK. And legal proceedings in the High Court were required to ensure that organisations, such as Google and Twitter, removed the child’s details from their platforms and to also restrain further publication of the child’s details. But despite injunctions being issued, internet searches are still revealing details about the identity of the 15-year-old.
The attempt to remedy the issue of this child’s identification online highlights the problem of dealing with online permanency. Once the horse has bolted, it’s hard to get it back in.
This issue has arisen in a range of high profile cases where children and young people have been accused of involvement in crime. One example is the murder of Ann Maguire, a teacher in Leeds who was murdered in 2014.
When the incident was first reported, many of the newspapers published various details about the accused 15-year-old, including information about where he lived and his family upbringing. The Sun newspaper “outed” the 15-year-old by printing his name.
Allowing the media free rein to name a child before they are charged can later prejudice the fairness of their trial if it proceeds to court. This is what occurred in the case of Jon Venables and Robert Thompson, two ten-year-old boys who were convicted of the murder of a two-year-old. Their lawyers claimed that media reporting had undermined the chances of a fair trial and this had breached their rights. The European Court of Human Rights in its judgment in 1999 ruled that the boys did not receive a fair trial.
While the Northern Ireland judiciary states that there is protection through media regulatory guidelines, my research demonstrates that the revised IPSO Code of Practice – which came into force in January 2016 – fails to provide crucial advice to journalists on the use of social media and online content.
I have called for a clear set of enforceable guidelines for the media, stating that children’s and young people’s social media imagery and comments should not be reprinted or published without their fully informed consent, and that all decision making should reflect children’s best interests.

Consequences
Publishing details in this way is a form of naming and shaming, which can stir up anger, resentment and retaliation in communities. In today’s media-hungry world, the chase is to reveal as much as possible – but it is especially worrying when this naming is done before charge, through a loophole.
Children who are already vulnerable are placed at further risk. Research I have conducted over the past ten years clearly demonstrates the impact of negative media representations on children and young people, and how they manifest in punishment attacks, beatings and exiling from their communities.
As a youth advocate who works with young people said during an interview with me in 2015: “Really in the society we live in you are guilty until proven innocent … basically people are looking at them [young people] and going ‘criminal’ … it is not right.” Several youth workers I also interviewed stated that releasing details or imagery of children “could damage their health, well-being and future job prospects” and they discussed examples of how identification in the media “led to them getting shot or a beating” in communities.
A report by the Standing Committee for Youth Justice – an alliance of organisations aiming to improve the youth justice system in England and Wales – proposed that in the digital age a legal ban on publishing children’s details at any time during their contact with the legal system is the only safeguard.
It is clear that legislators, policymakers and the media regulatory bodies need to keep up with advances in online and social media practices to ensure that children’s rights are not being breached. Addressing this loophole in the legislation is one step that is urgently required because media regulatory bodies currently lack clarity and suitable ethical guidelines on this issue.
The gap within the criminal justice legislative framework urgently needs to be addressed. Unless it is, there could be further examples of children who may never go on to be charged but who have their details published, shared, disseminated and permanently accessible via a basic internet search.
In this guest post Dr Daniel R. Thomas, University of Cambridge reviews research surrounding ethical issues in research using datasets of illicit origin. This post first appeared on “Light Blue Touchpaper” weblog written by researchers in the Security Group at the University of Cambridge Computer Laboratory.
On Friday at IMC I presented our paper “Ethical issues in research using datasets of illicit origin” by Daniel R. Thomas, Sergio Pastrana, Alice Hutchings, Richard Clayton, and Alastair R. Beresford. We conducted this research after thinking about some of these issues in the context of our previous work on UDP reflection DDoS attacks.
Data of illicit origin is data obtained by illicit means, such as exploiting a vulnerability or an unauthorized disclosure; in our previous work this was leaked databases from booter services. We analysed existing guidance on ethics, and papers that used data of illicit origin, to see what issues researchers are encouraged to discuss and what issues they did discuss. We found wide variation in current practice. We encourage researchers using data of illicit origin to include an ethics section in their paper: to explain why the work was ethical, so that the research community can learn from it. At present, in many cases, the positive benefits as well as the potential harms of research remain entirely unidentified. Few papers record explicit Research Ethics Board (REB) (aka IRB/Ethics Committee) approval for the activity described, and the justifications given for exemption from REB approval suggest deficiencies in the REB process. It is also important to focus on the “human participants” of research rather than the narrower “human subjects” definition, as not all the humans that might be harmed by research are its direct subjects.
In this guest post, Claire Bessant, Northumbria University, Newcastle, looks into the phenomenon of ‘sharenting’. Her article is relevant to the Information Law and Policy Centre’s annual conference coming up in November – Children and Digital Rights: Regulating Freedoms and Safeguards.
A toddler with birthday cake smeared across his face, grins delightedly at his mother. Minutes later, the image appears on Facebook. A not uncommon scenario – 42% of UK parents share photos of their children online with half of these parents sharing photos at least once a month.
Welcome to the world of “sharenting” – where more than 80% of children are said to have an online presence by the age of two. This is a world where the average parent shares almost 1,500 images of their child online before their fifth birthday.
But while a recent report from OFCOM confirms many parents do share images of their children online, the report also indicates that more than half (56%) of parents don’t. Most of these non-sharenting parents (87%) actively choose not to do so to protect their children’s private lives.

Over sharing
Parents often have good reasons for sharenting. It allows them to find and share parenting advice, to obtain emotional and practical support, and to maintain contact with relatives and friends.
Increasingly, though, concerns are being raised about “oversharenting” – when parents share too much, or, share inappropriate information. Sharenting can result in the identification of a child’s home, childcare or play location or the disclosure of identifying information which could pose risks to the child.
While many sharenters say they are conscious of the potential impact of their actions, and that they consider their children’s views before sharenting, a recent House of Lords report on the matter suggests not all parents do. The “growing up with the internet” report reveals some parents share information they know will embarrass their children – and some never consider their children’s interests before they post.
A recent survey for CBBC Newsround also warns that a quarter of children who’ve had their photographs sharented have been embarrassed or worried by these actions.

Think of the kids
Police in France and Germany have taken concrete steps to address sharenting concerns. They have posted Facebook warnings, telling parents of the dangers of sharenting, and stressing the importance of protecting children’s private lives.
Back in the UK, some academics have suggested the government should educate parents to ensure they understand the importance of protecting their child’s digital identity. But should the “nanny state” really be interfering in family life by telling parents how and when they can share their children’s information?
It’s clearly a tricky area to regulate, but it could be that the government’s recently published data protection bill may provide at least a partial answer.
In its 2017 manifesto, the Conservative party pledged to:
Give people new rights to ensure they are in control of their own data, including the ability to require major social media platforms to delete information.
In the recent Queen’s Speech, the government confirmed its commitment to reforming data protection law. And in August, it published a statement of intent providing more detail of its proposed reforms. In relation to the so-called “right to be forgotten” or “right to erasure”, the government states that:
Individuals will be able to ask for their personal data to be erased.
Users will also be able to ask social media platforms to delete information they posted during their childhood. In certain circumstances, social media companies will be required to delete any or all of a user’s posts. The statement explains:
For example, a post on social media made as a child would normally be deleted upon request, subject to very narrow exemptions.
The primary purpose of the data protection bill is to bring the new EU General Data Protection Regulation into UK law. This is to ensure UK law continues to accord with European data protection law post-Brexit – which is essential if UK companies are to continue to trade with their European counterparts.
It could also provide a solution for children whose parents like to sharent, because the new laws specify that an individual or organisation must obtain explicit consent or have some other legitimate basis to share an individual’s personal data. In real terms, this means that before a parent shares their child’s information online they should ask whether the child agrees.
Of course, this doesn’t mean parents are suddenly going to start asking for their children’s consent to sharent. But if a parent doesn’t obtain their child’s consent, or the child decides in the future that they are no longer happy for that sharented information to be online, the bill also provides another possible solution. Children could use the “right to erasure” to ask for social network providers and other websites to remove sharented information. Not perhaps a perfect answer, but for now it’s one way to put a stop to those embarrassing mugshots ending up in cyberspace for years to come.
The 5th interdisciplinary Conference on Trust, Risk, Information & the Law will be held on 25 April 2018 at the Holiday Inn, Winchester UK. Our overall theme for this conference will be: “Public Law, Politics and the Constitution: A new battleground between the Law and Technology?”
Our keynote speaker will be Jamie Bartlett, Director of the Centre for the Analysis of Social Media for Demos in conjunction with the University of Sussex, and author of several books including ‘Radicals’ and ‘The Dark Net’.
Papers are welcomed on any aspect of the conference theme. This might include although is not restricted to:
- Fake news: definition, consequences, responsibilities and liabilities;
- The use of Big Data in political campaigning;
- Social media ‘echo chambers’ and political campaigning;
- Digital threats and impact on the political process;
- The Dark Net and consequences for the State and the Constitution;
- Big Tech – the new States and how to regulate them;
- The use of algorithmic tools and Big Data by the public sector;
- Tackling terrorist propaganda and digital communications within Constitutional values;
- Technology neutral legislation;
- Threats to individual privacy and public law solutions;
- Online courts and holding the State to account.
Proposals for workshops are also welcome.
The full call for papers and workshops can be found at https://journals.winchesteruniversitypress.org/index.php/jirpp/pages/view/TRIL.
Deadline for submissions is 26 January 2018.
In this guest post, Professor of Law and Innovation at Queen’s University Belfast Daithí Mac Síthigh reviews the recent Information Law and Policy Centre seminar that explored Internet intermediaries and their legal role and obligations.
Taking stock of recent developments concerning the liability and duties associated with being an Internet intermediary (especially the provision of hosting and social media services) was the theme of a recent event at the Information Law and Policy Centre. In my presentation, I started about 20 years ago, reviewing the early statutory interventions, including the broad protection against liability contained in US law (and the narrower shield in respect of intellectual property!), and the conditional provisions adopted by the European Union in Directive 2000/31/EC (E-Commerce Directive), alongside developments in specific areas, such as defamation. The most recent 10 years, though, have seen a trend towards specific solutions for one area of law or another (what I called ‘fragmentation’ in 2013), as well as a growing body of caselaw on liability, injunctions, and the like (both from the Court of Justice of the EU and the domestic courts).
So in 2017, what do we see? I argued that if there ever were a consensus on what intermediaries should or should not be expected to do, it is certainly no longer the case. From the new provisions of the Digital Economy Act 2017 creating a statutory requirement for ISPs to block access to websites not compliant with the new UK rules on age verification for sexually explicit material, to the proposed changes to the Audiovisual Media Services Directive that would create new requirements for video sharing platforms, to the Law Commission’s recommendations on contempt of court and temporary removal of material in order to ensure fair proceedings, new requirements or at least the idea of tweaking the obligations are popping up here and there. This is also seen through the frequent exhortations to service providers, especially social media platforms, to do more about harassment, ‘terrorist’ material, and the like. In her speech to the Conservative party conference last week, the Home Secretary called on internet companies ‘to bring forward technology solutions to rid […] platforms of this vile terrorist material that plays such a key role in radicalisation. Act now. Honour your moral obligations.’ Meanwhile, the European Commission’s latest intervention, a Communication on ‘tackling illegal content online’, promotes a ‘more aligned approach [to removing illegal content, which] would make the fight against illegal content more effective’ and ‘reduce the cost of compliance’ – yet at this stage lacks clarity on how to handle divergence in legality between member states, the interaction with liability issues, and human rights issues (including the emerging jurisprudence of the ECtHR on the topic).
The Economist summarised developments in 2017 as a ‘global techlash’, while Warby J’s perceptive speech on media law pointed to the increased complexity of media law, ‘mainly, though not entirely’ as a result of legislative change. I called for a broader review of intermediary law in the UK (perhaps led by the Law Commissions in Scotland and England and Wales and the appropriate authorities in Northern Ireland), which would take a horizontal approach (i.e. encompassing multiple causes of action), address questions of power (though heeding Orla Lynskey’s caution that power in this context is not solely market power), consider liability, duties, and knock-on effects together (rather than the artificial separation of maintaining immunity while adding new burdens), and respond to Brexit.
Prof. Lorna Woods summarised the growing concerns about blanket models, emphasising a shift towards ‘procedural responsibility’ in systems such as the DEA. She highlighted the uncertainty about the status of the ECD’s no general obligation to monitor clause (article 15), which was never transposed into a specific provision in the UK, and the potential interaction between the proposed AVMSD amendments and UK-specific actions. James Michael framed the issue as influenced by a struggle between legal approaches and the behaviour of technological companies, and wondered whether an international approach (perhaps in the spirit of the OECD’s approach to data protection) would be more fruitful. Further discussion with an engaged audience included the interaction between the status of data controller and the provisions on intermediaries, the role of industry self-regulation, emerging questions of international trade law and harmonisation, and developments elsewhere e.g. injunctions against search engines in Canada.
Professor Daithí Mac Síthigh
20 Nov 2017, 17:30 to 20 Nov 2017, 19:30
Institute of Advanced Legal Studies, 17 Russell Square, London WC1B 5DR
As part of the University of London’s Being Human Festival, the Information Law and Policy Centre will be hosting a film and discussion panel evening at the Institute of Advanced Legal Studies.
One of the Centre’s key aims is to promote public engagement by bringing together academic experts, policy-makers, industry, artists, and key civil society stakeholders (such as NGOs, journalists) to discuss issues and ideas concerning information law and policy relevant to the public interest that will capture the public’s imagination.
This event will focus on the implications posed by the increasingly significant role of artificial intelligence (AI) in society and the possible ways in which humans will co-exist with AI in future, particularly the impact that this interaction will have on our liberty, privacy, and agency. Will the benefits of AI only be achieved at the expense of these human rights and values? Do current laws, ethics, or technologies offer any guidance with respect to how we should navigate this future society?
The primary purpose of this event is to encourage engagement and interest, particularly from young adults (15-18 years), in considering the implications for democracy, civil liberties, and human rights posed by the increasing role of AI in society, and how it affects their everyday decision-making as humans and citizens. A limited number of places for this event will also be available to the general public.
Confirmed speakers include:
Chair: Dr Nora Ni Loideain, Director and Lecturer in Law, Information Law and Policy Centre, University of London
- Dr Hamed Haddadi, Associate Professor at the Faculty of Engineering, Imperial College London and lead researcher of The Human-Data Interaction Project
- Hal Hodson, Technology Journalist at The Economist
- Professor John Naughton, Project Leader of the Technology and Democracy Project, University of Cambridge and columnist for The Observer
- Renate Samson, Chief Executive of leading human rights organisation Big Brother Watch
BOOKING: This event is free but advance booking is required.
Readers of the Information Law and Policy Centre blog are invited to respond to a call for papers for the Global Fake News and Defamation Symposium on the theme of ‘Fake News and Weaponized Defamation: Global Perspectives’.

Concept Note:
The notion of “fake news” has gained great currency in global popular culture in the wake of contentious social-media imbued elections in the United States and Europe. Although often associated with the rise of extremist voices in political discourse and, specifically, an agenda to “deconstruct” the power of government, institutional media, and the scientific establishment, fake news is “new wine in old bottles,” a phenomenon that has long historical roots in government propaganda, jingoistic newspapers, and business-controlled public relations. In some countries, dissemination of “fake news” is a crime that is used to stifle dissent. This broad conception of fake news not only acts to repress evidence-based inquiry of government, scientists, and the press; but it also diminishes the power of populations to seek informed consensus on policies such as climate change, healthcare, race and gender equality, religious tolerance, national security, drug abuse, poverty, homophobia, and government corruption, among others.
“Weaponized defamation” refers to the increasing invocation, and increasing use, of defamation and privacy torts by people in power to threaten press investigations, despite laws protecting responsible or non-reckless reporting. In the United States, for example, some politicians, including the current president, invoke defamation as both a sword and shield. Armed with legal power that individuals—and most news organizations—cannot match, politicians and celebrities, wealthy or backed by the wealth of others, can threaten press watchdogs with resource-sapping litigation; at the same time, some leaders appear to leverage their “lawyered-up” legal teams to make knowingly false attacks—or recklessly repeat the false attacks of others—with impunity.
Papers should have an international or comparative focus that engages historical, contemporary or emerging issues relating to fake news or “weaponized defamation.” All papers submitted will be fully refereed by a minimum of two specialized referees. Before final acceptance, all referee comments must be considered.
- Accepted papers will be peer reviewed and distributed during the conference to all attendees.
- Authors are given an opportunity to briefly present their papers at the conference.
- Accepted papers will be published in the Journal of International Media and Entertainment Law, the Southwestern Law Review, or the Southwestern Journal of International Law.
- Authors whose papers are accepted for publication will be provided with round-trip domestic or international travel (subject to caps) to Los Angeles, California, hotel accommodations, and complimentary conference registration.
Completed paper deadline: January 5, 2018
The Journal of International Media & Entertainment Law is a faculty-edited journal published by the Donald E. Biederman Entertainment and Media Law Institute at Southwestern Law School, in cooperation with the American Bar Association’s Forum on Communications Law, and the ABA’s Forum on the Entertainment and Sports Industries.
In this guest post, Harry T Dyer, University of East Anglia, looks into the complicated relationship between social media and young people. His article is relevant to the Information Law and Policy Centre’s annual conference coming up in November – Children and Digital Rights: Regulating Freedoms and Safeguards.
Facebook’s latest attempt to appeal to teens has quietly closed its doors. The social media platform’s Lifestage app (so unsuccessful that this is probably the first time you’ve heard of it) was launched a little under a year ago to resounding apathy and has struggled ever since.
Yet, as is Silicon Valley’s way, Facebook has rapidly followed the failure of one venture with the launch of another, unveiling a new video streaming service. Facebook Watch will host a series of live and pre-recorded short-form videos, including some original, professionally made content, in a move that will allow the platform to compete more directly with the likes of YouTube, Netflix and traditional TV channels.
Lifestage was just one of a long series of attempts by Facebook to stem the tide of young people increasingly interacting across multiple platforms. With Watch, the company seems to have changed tack from this focus on retaining young people, instead targeting a much wider user base. Perhaps Facebook has learnt that it will simply never be cool – but that doesn’t mean it can’t still be popular.
Lifestage was intended to compete with the increasingly popular Snapchat, the photo and video-sharing app especially popular among teenagers. But the spin-off was never able to achieve the user numbers necessary to sustain the venture. Worryingly for Facebook, this is the third failed attempt to emulate Snapchat’s success among teens, following the short-lived Facebook Poke and Facebook Slingshot, which also came to quiet and unceremonious ends. Facebook has also incorporated several of Snapchat’s features such as its Stories function directly into its main app, to a lukewarm reception.
This comes as the social media market continues to expand rapidly. Competition is fierce and numerous established companies are vying with start-ups and rising brands to catch the attention of a growing and increasingly connected user base.
No longer do one or two companies hold a monopoly on the social media landscape. Most teenagers are increasingly using more than one platform for their online interactions (though noticeably this trend does appear to be somewhat different outside the Western world). Young people are experimenting with new formats and ways of interacting, from short videos and disappearing messages, to anonymous feedback apps such as Sarahah, the latest craze to explode in popularity and excite media commentators.

I don’t want my mum to see this
Yet despite these issues, Facebook is still the world’s most popular social media platform by quite some distance and has more than 2 billion users worldwide. Recent data suggests it is almost as popular as Snapchat among teens and young users, as is the Facebook-owned photo-sharing app Instagram.
The problem, of course, is that Facebook’s popularity – and, crucially, the platform’s simplistic and user-friendly design – means teenagers’ parents, teachers, bosses and even grandparents now also use the platform. For teens, that means the platform has become a headache of competing and conflicting social obligations, with various aspects and contexts of their life collapsing into a single space.
The young people I talk to for my research suggest that Facebook’s broad appeal and easy design presents a unique experience for them. Facebook is a field of potential social landmines, with the fear that the diverse user base will see everything they post – causing anxiety, hedging and inaction.
Having to negotiate this broad audience means young people seem less likely to use some of the public aspects of Facebook, choosing instead to rely on features such as groups and private messaging. This explains why they seem to be relying increasingly on platforms such as Instagram and Snapchat to interact with their peers, a trend also noted by other researchers.
In this light, the attempt to encourage teenagers to use the same features as they do on Snapchat – when Facebook’s brand is so associated with a more public and socially difficult environment – seems inherently flawed. We can’t say where the company will go in the future, but it seems likely it will struggle to ever be as central to young people’s online social experiences as it once was.

Watch targets a wider audience
Yet the launch of Facebook Watch suggests perhaps the company has learnt its lesson. The new service is an attempt to create a broader space that can appeal to their wide user base, rather than aiming content, ideas and spaces specifically at teens and young people.
With the announcement of the video-sharing service, Facebook has put out a call for “community orientated” original shows. It will provide users with video recommendations based on what others – and in particular their friends – are watching. In this way, Facebook Watch will allow users to find content that reflects their interests and friendships, whoever they are. Rather than attempting to retain and target a specific demographic, Facebook Watch appears to be acknowledging the platform’s broader appeal.
This seems to match Facebook’s moves away from being a pure social networking platform and towards a much broader one-stop hub for news and content. With the launch of Watch, users need never leave the walled garden of Facebook as they can view both content embedded from around the web and original videos hosted on the site. And with Facebook already ranked second only to YouTube for online video content, again this move looks like an attempt to cater to a much broader market than teens alone.
The fact that Facebook seems increasingly keen to nurture its more diverse user base is likely to be a continuing concern for young people worried about their interactions on the platform. But, on the other hand, given YouTube’s massive appeal to the teen market, Watch may serve as a way to entice teens back to Facebook. Really, there’s only one way to sum up young people’s relationship with Facebook: it’s complicated.
In this guest post, Dr Natalia Kucirkova, UCL and Professor Sonia Livingstone, (London School of Economics and Political Science), explore ‘screen time’ as an outdated term and why we need to recognise the power of learning through screen-based technologies. Their article is relevant to the Information Law and Policy Centre’s annual conference coming up in November – Children and Digital Rights: Regulating Freedoms and Safeguards.
The idea of “screen time” causes arguments – but not just between children and their anxious parents. The Children’s Commissioner for England, Anne Longfield, recently compared overuse of social media to junk food and urged parents to regulate screen time using her “Digital 5 A Day” campaign.
This prompted the former director of Britain’s electronic surveillance agency, GCHQ, to respond by telling parents to increase screen time for children so they can gain skills to “save the country”, since the UK is “desperately” short of engineers and computer scientists.
Meanwhile, parents are left in the middle, trying to make sense of it all.
But the term “screen time” is problematic to begin with. A screen can refer to an iPad a child uses to Skype their grandparents, a Kindle for reading poetry, a television for playing video games, or a desktop computer for homework. Most screens are now multifunctional, so unless we specify the content, context and connections involved in particular screen time activities, any discussion will be muddled.
Measuring technology usage in terms of quantity rather than quality is also difficult. Children spend time on multiple devices in multiple places, sometimes in short bursts, sometimes constantly connected. Calculating the incalculable puts unnecessary pressure on parents, who end up looking at the clock rather than their children.
The Digital 5 A Day campaign has five key messages, covering areas like privacy, physical activity and creativity. Its focus on constructive activities and attitudes towards technology is a good start. Likewise, a key recommendation of the LSE Media Policy Project report was for more positive messaging about children’s technology use.
Technology use is complex and takes time to understand. Content matters. Context matters. Connections matter. Children’s age and capacity matter. Reducing this intricate mix to a simple digital five-a-day runs the risk of losing all the nutrients. Just like the NHS’s Five Fruit and Veg A Day Campaign, future studies will no doubt announce that five ought to be doubled to ten.
Another problem will come from attempts to interpret the digital five-a-day as a quality indicator. Commercial producers often use government campaigns to drive sales and interest in their products. If a so-called “educational” app claims that it “supports creative and active engagement”, parents might buy it – but there will be little guarantee that it will offer a great experience. It is an unregulated and confusing market – although help is currently provided by organisations providing evidence-based recommendations such as the NSPCC, National Literacy Trust, Connect Safely, Parent Zone, and the BBC’s CBeebies.
The constant flow of panicky media headlines doesn’t help parents or improve the level of public discussion. The trouble is that there’s too little delving into the whys and wherefores behind each story, and little independent examination of the evidence that might (or might not) support the claims being publicised. Luckily, some bodies, such as the Science Media Centre, do try to act as responsible intermediaries.
Let your kids spend more time online to 'save the country', says ex-GCHQ chief https://t.co/NjWPYNPIM6
— Sky News (@SkyNews) August 8, 2017
When it comes to young people and technology, it’s vital to widen the lens – away from a close focus on time spent, to the reality of people’s lives. Today’s children grow up in increasingly stressed, tired and rushed modern families. Technology commentators often revert to food metaphors to call for a balanced diet or even an occasional digital detox, and that’s fine to a degree.
But they can be taken too far, especially when the underlying harms are contested by science. “One-size-fits-all” solutions don’t work when they are taken too literally, or when they become yet another reason to blame parents (or children), or because they don’t allow for the diverse conditions of real people’s lives.
If there is a food metaphor that works for technology, it’s that we should all try some humble pie when it comes to telling others how to live. “Screen time” is an outdated and misguided shorthand for all the different ways of interacting, creating and learning through screen-based technologies. It’s time to drop it.
In this guest post, Vladlena Benson, Kingston University, assesses the need to encourage conscious social media use among the young. Her article is relevant to the Information Law and Policy Centre’s annual conference coming up in November – Children and Digital Rights: Regulating Freedoms and Safeguards.
Teenagers in Britain are fortunate to have access to computers, laptops and smartphones from an early age. A child in the UK typically receives a smartphone at around the age of 12 – among the earliest in Europe. The natural consequence is that children spend a significant amount of their time on the internet. In the nearly 20 years since the first social networks appeared on the internet, there has been considerable research into their psychological, societal and health effects. While these have often been seen as largely negative over the years, there is plenty of evidence to the contrary.
A recent report from the Education Policy Institute, for example, studied children’s use of the internet and their mental health. The report found that teenagers value social networks as a way of connecting with friends and family, maintaining their networks of friends, and long distance connections. Teenagers see social networking as a comfortable medium for sharing their issues and finding solutions to problems such as social isolation and loneliness. They are also more likely to seek help in areas such as health advice, unknown experiences, and help with exams and study techniques.
Social networks afford the opportunity to find people with similar interests, or to support teamwork in school projects. In unsettled economic and political times, teenagers use social networks as a means to be heard and to get involved in political activism, as well as volunteering and charitable activities.
Teenagers also leverage social networks to engage with creative projects, and many young artists are first noticed through the exposure offered by the rich networking opportunities of social media, such as musicians on MySpace or photographers on image sharing sites Flickr or Behance. Teenagers looking to pursue careers in art or other creative industries turn to social platforms in order to create their portfolios as well as to create with others.
These opportunities have a positive impact on adolescent character formation and the development of individual identity, and help young people choose a career path. These choices are made at an early age, and to this end social networks are enriching young people’s lives.

Risks not to be ignored
On the other hand, the report listed a substantial number of negative influences stemming from social media use, ranging from time-wasting and addictive, compulsive use to cyber-bullying, radicalisation, stress and sexual grooming, to name just a few.
Unsurprisingly, governments are concerned about the impact of social networking on the vulnerable. Concern over the uncontrolled nature of social networking has prompted action from parents and politicians. Children roaming freely on social networks surfaced as an issue in the recent UK general election, and was mentioned in the Conservative party manifesto, which made a key pledge of “safety for children online, and new rights to require social media companies to delete information about young people as they turn 18”. This is a tall order: it would require erasing tens of millions of teenagers’ profiles on around 20 different social platforms, hosted in different countries worldwide.
The Conservatives also suggested the party would “create a power in law for government to introduce an industry-wide levy from social media companies and communication service providers to support awareness and preventative activity to counter internet harms”. Awareness-raising is an important step towards encouraging conscious social media use among the young. But despite continuing efforts to educate youngsters about the dangers (and, to be fair, the benefits) of using social media, many are wary of the impact technology may have on overly-social teenagers once outside parental control.
It has been shown that teenagers increasingly use social networks in private, leaving parents outside environments where children are exposed to real-time content and largely unguarded instant communications. The concern raised in the report that “responses to protect, and build resilience in, young people are inadequate and often outdated” is timely. While schools are tasked with educating teenagers about the risks of social media, very few parents are able to effectively control the content their children access or monitor the evolving threats that operate online.

Speak their language
A recent study of compulsive social media use showed that it is not the user’s age that matters, but their individual motivations. Users who are highly sociable and driven by friends towards compulsive social media use suffer physically and socially. On the other hand, when users are driven by hedonic (fun-seeking) motivations, their physical health and sociability improve. This explains why teenagers in the UK see social networking as a positive phenomenon that enriches their social life. There is clearly potential to harness these positives.
While the tech giants that run the social networks with billions of users must play their part in ensuring the safety of their youngest users, it is also parents’ role to talk openly with their children about their use of social networks and to set expected standards of use. Teenagers have questions about life and are looking for answers to their problems as they go through a challenging period. With the prime minister naming “mental health as a key priority”, schools, parents, politicians and social networking platforms should help teenagers build resilience to what they encounter online and how it makes them feel, rather than adopting only a safeguarding approach. It is notable that 78% of young people who contact the organisation Childline now do so online: teachers, family and friends providing support should make the most of a medium with which today’s children and teenagers are comfortable.
Readers of the Information Law and Policy Centre blog are invited to participate in the second, full-day International Law for the Sustainable Development Goals Workshop at the Department of International Law, University of Groningen, NL.
Our aim with the second track of this one-day Workshop is to explore the right to science’s potential value in the context of technology & knowledge transfer and sustainable development. More specifically, we aim to discuss the role of the right to science as (a) a means to implement the SDGs and related human rights; (b) an enabler of international cooperation regarding technology and knowledge sharing; and (c) a stand-alone human right and the respective obligations of States in enhancing systemic policy and institutional coherence and informing policy development and coordination.
Please find the detailed Call for Papers available here.
We invite abstract proposals from interested scholars from all disciplines. Proposals should not exceed 500 words in length. Please send your proposals as an attachment to Marlies Hesselman (email@example.com) for Track 1 and to Mando Rachovitsa (firstname.lastname@example.org) for Track 2. The deadline for abstracts is 15 September 2017. All proposals will undergo peer review and notifications of acceptance will be sent out by 20 September 2017.
Draft papers are expected to be delivered by 15 November 2017 for circulation among participants. We plan to pursue the publication of a special issue as a result of this Workshop.
The Workshop is scheduled to take place on 24 November 2017 at the University of Groningen and is part of the 2017-2018 Workshop Series “International Law for the Sustainable Development Goals”.
Information Law and Policy Centre’s Annual Conference 2017 – Children and Digital Rights: Regulating Freedoms and Safeguards
Date 17 Nov 2017, 09:30 to 17 Nov 2017, 17:30
Venue Institute of Advanced Legal Studies, 17 Russell Square, London WC1B 5DR
The Internet provides children with more freedom to communicate, learn, create, share, and engage with society than ever before. For instance, research by Ofcom in 2016 found that 72% of young teenagers (twelve to fifteen) in the UK have social media accounts which are often used for homework groups. 20% of the same group have made their own digital music and 30% have used the Internet for civic engagement by signing online petitions, or sharing and talking online about the news.
Interacting within this connected digital world, however, also presents a number of challenges to ensuring the adequate protection of a child’s rights to privacy, freedom of expression, and safety, both online and offline. These risks range from children being unable to identify advertisements on search engines to bullying in online chat groups. Children may also be targeted via social media platforms with methods (such as fake online identities or manipulated photos/images) specifically designed to harm them or exploit their particular vulnerabilities and naivety.
At the ILPC’s Annual Conference, regulators, practitioners, civil society, and leading academic experts will address and examine the key legal frameworks and policies being used and developed to safeguard these freedoms and rights. These legislative and policy regimes will include the UN Convention on the Rights of the Child, and the related provisions (such as consent, transparency, and profiling) under the UK Digital Charter, and the Data Protection Bill which will implement the EU General Data Protection Regulation.
The ILPC’s Annual Conference and Lecture will take place on Friday 17th November 2017, followed by an evening reception.
Attendance will be free of charge thanks to the support of the IALS and our sponsors, although registration is required as places are limited.
Key speakers, chairs, and discussants at the Annual Conference will provide a range of national and international legal insights and perspectives from the UK, Israel, Australia, and Europe, and will include:
- Baroness Beeban Kidron OBE, Film-maker, Member of The Royal Foundation Taskforce on the Prevention of Cyberbullying, and Founder of 5Rights;
- Anna Morgan, Head of Legal, Deputy Data Protection Commissioner of Ireland;
- Lisa Atkinson, Group Manager on Policy Engagement, Information Commissioner’s Office;
- Renate Samson, Chief Executive of Big Brother Watch;
- Graham Smith, Bird & Bird LLP, Solicitor and leading expert in UK Internet Law.
The best papers from the conference’s plenary sessions and panels will be featured in a special issue of Bloomsbury’s Communications Law journal, following a peer-review process. Those giving papers will be invited to submit full draft papers to the journal by 1st November 2017 for consideration by the journal’s editorial team.
About the Information Law and Policy Centre at the IALS:
The Information Law and Policy Centre (ILPC) produces, promotes, and facilitates research about the law and policy of information and data, and the ways in which law both restricts and enables the sharing, and dissemination, of different types of information.
The ILPC is part of the Institute of Advanced Legal Studies (IALS), which was founded in 1947. It was conceived, and is funded, as a national academic institution, attached to the University of London, serving all universities through its national legal research library. Its function is to promote, facilitate, and disseminate the results of advanced study and research in the discipline of law, for the benefit of persons and institutions in the UK and abroad.
The ILPC’s Annual Conference and Annual Lecture form part of a series of events celebrating the 70th Anniversary of IALS in November.
About Communications Law (Journal of Computer, Media and Telecommunications Law):
Communications Law is a well-respected quarterly journal published by Bloomsbury Professional covering the broad spectrum of legal issues arising in the telecoms, IT, and media industries. Each issue brings you a wide range of opinion, discussion, and analysis from the field of communications law. Dr Paul Wragg, Associate Professor of Law at the University of Leeds, is the journal’s Editor in Chief.
In this guest post Lorna Woods, Professor of Internet Law at the University of Essex, explores the EU’s proposed Passenger Name Record (PNR) agreement with Canada. This post first appeared on the blog of Steve Peers, Professor of EU, Human Rights and World Trade Law at the University of Essex.
Opinion 1/15 EU/Canada PNR Agreement, 26 July 2017

Facts
Canadian law required airlines, in the interests of the fight against serious crime and terrorism, to provide certain information about passengers (API/PNR data), an obligation which, under EU data protection rules, required airlines to transfer data outside the EU. The PNR data includes the names of air passengers, the dates of intended travel, the travel itinerary, and information relating to payment and baggage. It may reveal travel habits, relationships between individuals, and information on passengers’ financial situations or dietary habits. To regularise the transfer of data, and to support police cooperation, the EU negotiated an agreement with Canada specifying the data to be transferred and the purposes for which the data could be used, as well as some processing safeguards (e.g. on the use of sensitive data, security obligations, oversight requirements, and access by passengers). The data could be retained for five years, albeit in a depersonalised form. Further disclosure of the data beyond Canada and the Member States was permitted in limited circumstances. The European Parliament requested an opinion from the Court of Justice under Article 218(11) TFEU as to whether the agreement satisfied fundamental human rights standards and whether the appropriate Treaty base had been used for the agreement.

Opinion
The Court noted that the agreement fell within the EU’s constitutional framework, and must therefore comply with its constitutional principles, including (though this point was not made express), respect for fundamental human rights (whether as a general principle or by virtue of the EU Charter – the EUCFR).
After dealing with questions of admissibility, the Court addressed the question of appropriate Treaty base. It re-stated existing principles (elaborated, for example, in Case C‑263/14 Parliament v Council, judgment 14 June 2016, EU:C:2016:435) with regard to choice of Treaty base generally: the choice must rest on objective factors (including the aim and the content of the measure) which are amenable to judicial review. In this context the Court found that the proposed agreement has two objectives: safeguarding public security; and safeguarding personal data [opinion, para 90]. The Court concluded that the two objectives were inextricably linked: while the driver of the need for PNR data was the protection of public security, the transfer of data would be lawful only if data protection rules were respected [para 94]. Therefore, the agreement should be based on both Article 16(2) (data protection) and Article 87(2)(a) TFEU (police cooperation). It held, however, that Article 82(1)(d) TFEU (judicial cooperation) could not be used, partly because judicial authorities were not included in the agreement.
Looking at the issue of data protection, the Court re-stated the question as being ‘on the compatibility of the envisaged agreement with, in particular, the right to respect for private life and the right to the protection of personal data’ [para 119]. It then commented that although both Article 16 TFEU and Article 8 EUCFR enshrine the right to data protection, in its analysis it would refer to Article 8 only, because that provision lays down in a more specific manner the conditions for data processing. The agreement refers to the processing of data concerning identified individuals, and therefore may affect the fundamental right to respect for private life guaranteed in Article 7 EUCFR as well as the right to protection to personal data in Article 8 EUCFR. The Court re-iterated a number of principles regarding the scope of the right to private life:
‘the communication of personal data to a third party, such as a public authority, constitutes an interference with the fundamental right enshrined in Article 7 of the Charter, whatever the subsequent use of the information communicated. The same is true of the retention of personal data and access to that data with a view to its use by public authorities. In this connection, it does not matter whether the information in question relating to private life is sensitive or whether the persons concerned have been inconvenienced in any way on account of that interference’ [para 124].
The transfer of PNR data and its retention and any use constituted an interference with both Article 7 [para 125] and Article 8 EUCFR [para 126]. In assessing the seriousness of the interference, the Court flagged ‘the systematic and continuous’ nature of the PNR system, the insight into private life of individuals, the fact that the system is used as an intelligence tool and the length of time for which the data is available.
Interferences with these rights may be justified. Nonetheless, there are constraints on any justification: Article 8(2) of the EU Charter specifies that processing must be ‘for specified purposes and on the basis of the consent of the person concerned or some other legitimate basis laid down by law’; and, according to Article 52(1) of the EU Charter, any limitation must be provided for by law and respect the essence of those rights and freedoms. Further, limitations must be necessary and genuinely meet objectives of general interest recognised by the Union or the need to protect the rights and freedoms of others.
Following WebMindLicenses (Case C‑419/14, judgment of 17 December 2015, EU:C:2015:832, para 81), the law that permits the interference should also set down the extent of that interference. Proportionality requires that any derogation from and limitation on the protection of personal data should apply only insofar as is strictly necessary. To this end and to prevent the risk of abuse, the legislation must set down ‘clear and precise rules governing the scope and application of the measure in question and imposing minimum safeguards’, specifically ‘indicat[ing] in what circumstances and under which conditions a measure providing for the processing of such data may be adopted’ [para 141], especially when automated processing is involved.
The Court considered whether there was a legitimate basis for the processing, noting that although passengers may be said to consent to the processing of PNR data, this consent related to a different purpose. The transfer of the PNR data is not conditional on the specific consent of the passengers and must therefore be grounded on some other basis, within the terms of Article 8(2) EUCFR. The Court rejected the Parliament’s submission that the meaning of ‘law’ be restricted to ‘legislative act’ internally. The Court, following the reasoning of the Advocate General, found that in this regard the international agreement was the external equivalent of the legislative act.
In line with its previous jurisprudence, the Court accepted that public security is an objective of public interest capable of justifying even serious interferences with Articles 7 and 8 EUCFR. It also noted that everybody has the right to security of the person (Art. 6 EUCFR), though this point was taken no further. The Court considered that PNR data revealed only limited aspects of a person’s private life, so that the essence of the right was not adversely affected [para 151]. In principle, limitation may then be possible. The Court accepted that PNR data transfer was appropriate, but not that the test of necessity was satisfied. It agreed with the Advocate General that the categories of data to be transferred were not sufficiently precise, specifically ‘available frequent flyer and benefit information (free tickets, upgrades, etc.)’, ‘all available contact information (including originator information)’ and ‘general remarks including Other Supplementary Information (OSI), Special Service Information (SSI) and Special Service Request (SSR) information’. Although the agreement required the Canadian authorities to delete any data transferred to them which fell outside these categories, this obligation did not compensate for the lack of precision regarding the scope of these categories.
The Court noted that the agreement identified a category of ‘sensitive data’; it was therefore to be presumed that sensitive data would be transferred under the agreement. The Court then reasoned:
any measure based on the premiss that one or more of the characteristics set out in Article 2(e) of the envisaged agreement may be relevant, in itself or in themselves and regardless of the individual conduct of the traveller concerned, having regard to the purpose for which PNR data is to be processed, namely combating terrorism and serious transnational crime, would infringe the rights guaranteed in Articles 7 and 8 of the Charter, read in conjunction with Article 21 thereof [para 165]
Additionally, any transfer of sensitive data would require a ‘precise and particularly solid’ reason beyond that of public security and prevention of terrorism. This justification was lacking. The transfer of sensitive data and the framework for the use of those data would be incompatible with the EU Charter [para 167].
While the agreement tried to limit the impact of automated decision-making, the Court found it problematic because of the need to have reliable models on which the automated decisions were made. These models, in the view of the Court, must produce results that identify persons under a ‘reasonable suspicion’ of participation in terrorist offences or serious transnational crime and should be non-discriminatory. Models/databases should also be kept up-to-date and accurate and subject to review for bias. Because of the error risk, all positive automated decisions should be individually checked.
In terms of the purposes for processing the data, the definitions of terrorist offences and serious transnational crime were sufficiently clear. There were, however, other provisions allowing case-by-case assessment. These provisions (Article 3(5)(a) and (b) of the treaty) were found to be too vague. By contrast, the Court determined that the authorities who would receive the data were sufficiently identified. Further, it accepted that the transfer of data of all passengers, whether or not they were identified as posing a risk, does not exceed what is necessary, as passengers must comply with Canadian law and ‘the identification, by means of PNR data, of passengers liable to present a risk to public security forms part of border control’ [para 188].
Relying on its recent judgment in Tele2/Watson (Joined Cases C‑203/15 and C‑698/15, EU:C:2016:970), which I discussed here, the Court reiterated that there must be a connection between the data retained and the objective pursued for the duration of the time the data are held, which brought into question the use of the PNR data after passengers had disembarked in Canada. Further, the use of the data must be restricted in accordance with those purposes. However,
where there is objective evidence from which it may be inferred that the PNR data of one or more air passengers might make an effective contribution to combating terrorist offences and serious transnational crime, the use of that data does not exceed the limits of what is strictly necessary [para 201].
Following verification of passenger data and permission to enter Canadian territory, the use of PNR data during passengers’ stay must be based on new justifying circumstances. The Court expected that this should be subject to prior review by an independent body. The Court held that the agreement did not meet the required standards. Similar points were made, even more strongly, in relation to the use of PNR data after the passengers had left Canada. In general, this was not strictly necessary, as there would no longer be a connection between the data and the objective pursued by the PNR Agreement such as to justify the retention of the data. PNR data may be stored in Canada, however, where particular passengers present a risk of terrorism or serious transnational crime. Moreover, given the average lifespan of international serious crime networks and the duration and complexity of investigations relating to them, the Court did not hold that the retention of data for five years went beyond the limits of necessity [para 209].
The agreement allows PNR data to be disclosed by the Canadian authority to other Canadian government authorities and to government authorities of third countries. The recipient country must satisfy EU data protection standards; an international agreement between the third country and the EU or an adequacy decision would be required. There is a further, unlimited and ill-defined possibility of disclosure to individuals ‘subject to reasonable legal requirements and limitations … with due regard for the legitimate interests of the individual concerned’. This provision did not satisfy the necessity test.
To ensure that individuals’ rights to access their data and to have data rectified are protected, in line with Tele2/Watson, passengers must be notified of the transfer of their PNR data to Canada and of its use as soon as that information is no longer liable to jeopardise the investigations being carried out by the government authorities referred to in the envisaged agreement. In this respect, the agreement is deficient. While passengers are told that the data will be used for security checks/border control, they are not told whether their data has been used by the Canadian Competent Authority beyond use for those checks. While the Court accepted that the agreement provided passengers with a possible remedy, the agreement was deficient in that it did not guarantee in a sufficiently clear and precise manner that the oversight of compliance would be carried out by an independent authority, as required by Article 8(3) EUCFR.

Comment
There are lots of issues in this judgment, of interest from a range of perspectives, but its length and complexity means it is not an easy read. Because of these characteristics, a blog – even a lengthy blog – could hardly do justice to all issues, especially as in some instances, it is hardly clear what the Court’s position is.
On the whole the Court follows the approach of its Advocate General, Mengozzi, on a number of points specifically referring back to his Opinion. There is, as seems increasingly to be the trend, heavy reliance on existing case law and it is notable that the Court refers repeatedly to its ruling in Tele2/Watson. This may be a judicial attempt to suggest that Tele2/Watson was not an aberration and to reinforce its status as good law, if that were in any doubt. It also operates to create a body of surveillance law rulings that are hopefully consistent in underpinning principles and approach, and certainly some of the points in earlier case law are reiterated with regards to the importance of ex ante review by independent bodies, rights of redress and the right of individuals to know that they have been subject to surveillance.
The case is of interest not only as regards mass surveillance but more generally in relation to Article 16(2) TFEU. It is also the first time an opinion has been given on a draft agreement considering its compatibility with human rights standards as well as the appropriate Treaty base. In this respect the judgment may be a little disappointing; certainly on Article 16, the Court did not go into the same level of detail as the AG’s opinion [AG114-AG120]. Instead it equated Article 16 TFEU to Article 8 EUCFR, and based its analysis on the latter provision.
As a general point, it is evident that the Court has adopted a detailed level of review of the PNR agreement. The outcome of the case has widely been recognised as having implications, as discussed earlier on this blog, for example. Certainly, as the Advocate General noted, there is a possible impact on other PNR agreements [AG para 4], which relate to the same sorts of data shared for the same objectives. The EDPS made this point too, in the context of the EU PNR Directive:
Since the functioning of the EU PNR and the EU-Canada schemes are similar, the answer of the Court may have a significant impact on the validity of all other PNR instruments … [Opinion 2/15, para 18]
There are other forms of data sharing agreement, for example SWIFT, the Umbrella Agreement, and the Privacy Shield (and other adequacy decisions), the last of which is coming under pressure in any event (DRI v Commission (T-670/16) and La Quadrature du Net and Others v Commission (T-738/16)). Note that in this context the question is not only one of safeguards for the protection of rights but also one of Treaty base. The Court found that Article 16 must be used and that, because there was no role for judicial authorities, still less their cooperation, the use of Article 82(1)(d) is wrong. It has, however, been used for other PNR agreements. This means that the basis for those agreements is thrown into doubt.
While the Court agreed with its Advocate General that a double Treaty base was necessary given the inextricable linkage, there is some room to question this assumption. It could be argued that there is a dominant purpose: the primary purpose of the PNR agreement is to protect personal data, albeit with a different objective in view, that of public security. In the background, however, is the position of the UK, Ireland and Denmark and their respective ‘opt-outs’ in the field. The finding of a joint Treaty base made possible the Court’s conclusion that:
since the decision on the conclusion of the envisaged agreement must be based on both Article 16 and Article 87 TFEU and falls, therefore, within the scope of Chapter 5 of Title V of Part Three of the FEU Treaty in so far as it must be founded on Article 87 TFEU, the Kingdom of Denmark will not be bound, in accordance with Articles 2 and 2a of Protocol No 22, by the provisions of that decision, nor, consequently, by the envisaged agreement. Furthermore, the Kingdom of Denmark will not take part in the adoption of that decision, in accordance with Article 1 of that protocol. [para 113, see also para 115]
The position would, however, have been different had the agreement been found to be predominantly about data protection and therefore based on Article 16 TFEU alone.
Looking at the substantive issues, the Court clearly accepted the need for PNR to meet the threat from terrorism, noting in particular that Article 6 of the Charter (the “right to liberty and security of person”) can justify the processing of personal data. While it accepted that this resulted in the systematic transfer of data on large numbers of people, we see no comment on mass surveillance. Yet is this not similar to the ‘general and indiscriminate’ collection and analysis rejected by the Court in Tele2/Watson [para 97], which cannot be seen as automatically justified even in the context of the fight against terrorism [paras 103 and 119]? Certainly, the EDPS took the view in its opinion on the EU PNR Directive that “the non-targeted and bulk collection and processing of data of the PNR scheme amount to a measure of general surveillance” [Opinion 1/15, para 63]. It may be that the difference lies in the nature of the data; even if this is so, the Court does not make this argument. Indeed, it makes no argument, but rather weakly accepts the need for the data. On this point, it should be noted that “the usefulness of large-scale profiling on the basis of passenger data must be questioned thoroughly, based on both scientific elements and recent studies” [Art. 29 WP Opinion 7/2010, p. 4]. In this respect, Opinion 1/15 does not take as strong a stand as Tele2/Watson [cf. paras 105-106]; the Court seems to have been less emphatic about the significance of the surveillance even than the Advocate General [AG 176].
In terms of justification, while the Court accepts that the transfer of data and its analysis may give rise to intrusion, it suggests that the essence of the right has not been affected. In this it follows the approach in the communications data cases. It is unclear, however, what the essence of the right is; it seems that no matter how detailed a picture of an individual can be drawn from the analysis of data, the essence of the right remains intact. If the implication is that where the essence of the right is affected then no justification for the intrusion could be made, a narrow view of essence is understandable. This does not, however, answer the question of what the essence is and, indeed, whether the essence of the right is the same for Article 7 as for Article 8. In this case, the Court has once again referred to both articles, without delineating the boundaries between them, but then proceeded to base its analysis mainly on Article 8.
It is also unclear how Article 8(2) relates to Article 52. The Court bundles the requirements of these two provisions together, but they serve different purposes: Article 8(2) further elaborates the scope of the right, while Article 52 deals with the limitations of Charter rights. Despite this, it seems that some of the findings will apply to Article 52 in the context of other rights. For example, in holding that an international agreement constitutes ‘law’ for the purposes of the EUCFR, the Court took a broader approach to the meaning of ‘law’ than the Parliament had argued for. This, however, seems a sensible approach, avoiding undue formality.
One further point can be made about the approach to interpreting exceptions to the rights and Article 52: the Court did not follow the Advocate General, who had suggested that strict necessity should be understood in the light of achieving a fair balance [AG 207].
Some specific points are worth highlighting. The Court held that sensitive data (information revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade-union membership, and information about a person’s health or sex life) should not be transferred. It is not clear how this category of data should be interpreted, especially as regards proxies for sensitive data (e.g. food preferences may give rise to inferences about a person’s religious beliefs).
One innovation in the PNR context is the distinction the Court introduced between the use of PNR data on entry, use while the traveller is in Canada, and use after the person has left, which perhaps mitigates the Court’s acceptance of undifferentiated surveillance of travellers. The Court’s view of acceptable use in relation to this last category is the most stringent. While the Court accepts the link between the processing of PNR data and the objective of public security on arrival, after departure it expects that link to be proven; absent such proof, there is no justification for the retention of the data. Does this mean that the PNR data of persons who are not suspected of terrorism or transnational crime should be deleted at the point of their departure? Such a requirement surely gives rise to practical problems and would seem to limit the Court’s earlier acceptance of the use of general PNR data to verify/update computer models [para 198].
One of the weaknesses of the Court’s case law so far has been a failure to consider investigatory techniques, and whether all are equally acceptable. Here we see the Court beginning to consider the use of automated intelligence techniques. While the Court does not go into detail on all the issues to which predictive policing and big data might give rise, it does note that models must be accurate. It also refers to Article 21 EUCFR (non-discrimination). Since this section is phrased in general terms, it has potentially wide-reaching application, perhaps even beyond the public sector.
The Court’s judgment has further implications as regards the sharing of PNR and other security data with other countries besides Canada, most notably in the context of EU/UK relations after Brexit. Negotiators now have a clearer indication of what it will take for an agreement between the EU and a non-EU state to satisfy the requirements of the Charter, in the ECJ’s view. Time will tell what impact this ruling will have on the progress of those talks.
Barnard & Peers: chapter 25
JHA4: chapter II:9
Marion Oswald, Senior Fellow at the Centre for Information Rights, University of Winchester, contributes to the blog, examining the British and Irish Law Education and Technology Association (BILETA) consultation run by the Centre for Information Rights. The consultation took place on 7 June 2017 and concerned the impact of broadcast and social media on the privacy and best interests of young children.
In 1985, in his book ‘Amusing Ourselves to Death’, Professor Neil Postman warned us that:
‘To be unaware that a technology comes equipped with a programme for social change, to maintain that technology is neutral…is…stupidity plain and simple.’
Postman was mainly concerned with the impact of television, with its presentation of news as ‘vaudeville’ and thus its influence on other media to do the same. He was particularly exercised by the co-option by television of ‘serious modes of discourse – news, politics, science, education, commerce, religion’ and the transformation of these into entertainment packages. This ‘trivial pursuit’ information environment, he argued, risked amusing us into indifference and a kind of ‘culture-death.’ Can a culture survive ‘if the value of its news is determined by the number of laughs it provides’?
We appear not to have heeded Postman’s warnings. Many might say they are over-blown. 21st century digital culture continues to emphasise the image, now often combined with ‘bite-sized’ written messages. We have instant 24/7 access to information and rolling news. We are fascinated by reality programming and digital technologies that allow us to scrutinise each other’s lives. Having lived through this digital revolution, I know and appreciate its benefits, especially in relation to the expansion of knowledge and horizons, and to freedom of expression. Like many others, however, I have concerns. What, for instance, would Postman have made of ‘sharenting’; of the ‘YouTube Families’ phenomenon; of the way that younger and younger children now feature on mainstream broadcasts, with public comment via social media using the inevitable hashtag?
It was such concerns that inspired the BILETA consultation workshop held on 7 June 2017 at IALS to discuss the legislative, regulatory and ethical framework surrounding the depiction of young children on digital, online and broadcast media. The full report from the workshop is now available here. As was to be expected, the discussion was wide-ranging with a variety of opinions expressed. The report’s authors have attempted to distil the debate into proposals which we hope will move the debate forward, and generate further discussion and no doubt criticism! (The recommendations represent the views of the report’s authors and do not necessarily represent the views of workshop participants.)
The workshop focused first on a child’s ‘reasonable expectation of privacy’, a concept described by one participant as ‘highly artificial and strained’. Why, it was asked, should a child’s privacy depend upon his or her parents’ privacy expectations? The report’s authors propose that young children should have a privacy right independent of their parents’ privacy expectations. Such a right could be trumped by other rights or interests, for instance public interest exceptions relating to news and current affairs reporting, journalism and the arts, or the parents’ right to freedom of expression. The report’s authors recommend that there should, however, be a clearer requirement and process for the child’s interests to be considered alongside the potential benefits of publication.
Mainstream broadcasters take a variety of approaches to the depiction, and identification, of young children in documentary and science programming. The media should continue to reflect the lives of children, and it is in no-one’s interests to have a media where children simply do not appear for fear of potential harm. Programmes made by highly regulated broadcasters, for whom ensuring the wellbeing of children is of paramount importance, can help to set the high ethical watermark in this area for other forms of media to follow. We should continue to monitor the inclusion of young children in ‘Science Entertainment’ broadcasts, however, and the parallel impact of social media. The report’s authors also recommend that more research be undertaken into the impact of broadcast media exposure on young children, to understand its effects, both positive and negative. Once these effects are more fully understood, action can be taken to reduce any potential harm.
There was some support during the workshop for the view that online intermediaries should take on more responsibility for activities and content that may be harmful to young children. The report’s authors recommend more consistency in terms of compliance and regulation between regulated broadcasters and non-mainstream digital media/social media. This could enhance protection for children in ‘YouTube families’ and in other instances where there are no or limited checks on what is put into the public domain. ‘Controller hosts’ (such as Facebook, YouTube and Twitter) and ‘independent intermediaries’ (such as Google) should have a duty of care to consider young children’s privacy and best interests in their operations. Further research should be undertaken into the potential of image-matching, tracking and content moderation technologies to control the extent to which information and images relating to a young child can be copied, re-contextualised and re-shown in a context different from that of the original post or publication.
The introduction of a Children’s Digital Ombudsman could provide a way for children’s interests to be better represented in relation to all forms of digital publications. We shouldn’t put all our eggs in the basket of the so-called ‘right to be forgotten’. Before Postman’s warnings become irreversible reality, let’s consider how we want our young children to be treated in the offline world and strive to hold the digital world to the same standards.
Defined by David Erdos, ‘Delimiting the Ambit of Responsibility of Intermediary Publishers for Third Party Rights in European Data Protection: Towards a Synthetic Interpretation of the EU acquis’ (27 June 2017), available at SSRN: https://ssrn.com/abstract=2993154
On 17 November 2017, the Information Law & Policy Centre will hold its 3rd Annual Workshop and Lecture, entitled Children and Digital Rights: Regulating Safeguards and Freedoms. See the Call for Papers.