Information Law & Policy Centre Blog
In this guest post, Faith Gordon of the University of Westminster explores how, under UK law, a child's anonymity is not entirely guaranteed. Faith is speaking at the Information Law and Policy Centre's annual conference – Children and Digital Rights: Regulating Freedoms and Safeguards – this Friday, 17 November.
Under the 1948 Universal Declaration of Human Rights, each individual is presumed innocent until proven guilty. A big part of protecting this principle is guaranteeing that public opinion is not biased against someone who is about to be tried in the courts. In this situation, minors are particularly vulnerable and need all the protection that can be legally offered. So when you read stories about cases involving children, they are often accompanied by the line that the accused cannot be named for legal reasons.
However, a loophole exists: a minor can be named before being formally charged. And as we all know in this digital age, being named comes with consequences – details or images shared of the child become permanent. While the Data Protection Bill gives children the strongest version of the right to be forgotten, children and young people know that once their images and posts are screenshotted they have little or no control over how they are used and who has access to them.
Should a child or young person come into conflict with the law, Section 44 of the Youth Justice and Criminal Evidence Act 1999 could offer pre-charge protection for them as minors, but it has never been enacted.
The latest consideration of this issue was during debates in the House of Lords in July and October 2014. It was decided that the aims of Section 44 could be achieved by protections from media regulatory bodies. But given that, in reality, regulatory bodies and their codes of practice don't adequately protect minors pre-charge, the government's failure to enact this section of the law is arguably contrary to Article 8 of the European Convention on Human Rights, which is the right to respect for private and family life.
Once you're named …
This failure has now exposed a 15-year-old child. Private details about him were published in print media, online and by other individuals on social media after he was questioned by the Police Service of Northern Ireland in connection with the alleged TalkTalk hacking incident.
This alleged hacking has been described as one of the largest and most public contemporary incidents of cybercrime in the UK. Legal proceedings in the High Court were required to ensure that organisations such as Google and Twitter removed the child's details from their platforms, and to restrain further publication of those details. But despite injunctions being issued, internet searches still reveal details about the identity of the 15-year-old.
The attempt to remedy the issue of this child's identification online highlights the problem of dealing with online permanency. Once the horse has bolted, it's very hard to get it back in the stable.
This issue has arisen in a range of high profile cases where children and young people have been accused of involvement in crime. One example is the case of Ann Maguire, a teacher who was murdered in Leeds in 2014.
When the incident was first reported, many of the newspapers published various details about the accused 15-year-old, including information about where he lived and his family upbringing. The Sun newspaper “outed” the 15-year-old by printing his name.
Allowing the media free rein to name a child before they are charged can later prejudice the fairness of their trial if it proceeds to court. This is what occurred in the case of Jon Venables and Robert Thompson, two ten-year-old boys who were convicted of the murder of a two-year-old. Their lawyers claimed that media reporting had undermined the chances of a fair trial and this had breached their rights. The European Court of Human Rights in its judgment in 1999 ruled that the boys did not receive a fair trial.
While the Northern Ireland judiciary states that there is protection through media regulatory guidelines, my research demonstrates that the revised IPSO Code of Practice – which came into force in January 2016 – fails to provide crucial advice to journalists on the use of social media and online content.
I have called for a clear set of enforceable guidelines for the media, which states that children's and young people's social media imagery and comments should not be reprinted or published without their fully informed consent and that all decision making should reflect children's best interests.
Consequences
Publishing details in this way is a form of naming and shaming, which can stir up anger, resentment and retaliation in communities. In today's media-hungry world, the chase is on to reveal as much as possible – but it is especially worrying when this naming is done before charge, by exploiting a loophole.
Children who are already vulnerable are placed at further risk. Research I have conducted over the past ten years clearly demonstrates the impact of negative media representations of children and young people, and their manifestation in punishment attacks, beatings and exile from their communities.
As a youth advocate who works with young people said during an interview with me in 2015: “Really in the society we live in you are guilty until proven innocent … basically people are looking at them [young people] and going ‘criminal’ … it is not right.” Several youth workers I also interviewed stated that releasing details or imagery of children “could damage their health, well-being and future job prospects” and they discussed examples of how identification in the media “led to them getting shot or a beating” in communities.
A report by the Standing Committee for Youth Justice – an alliance of organisations aiming to improve the youth justice system in England and Wales – proposed that in the digital age a legal ban on publishing children’s details at any time during their contact with the legal system is the only safeguard.
It is clear that legislators, policymakers and the media regulatory bodies need to keep up with advances in online and social media practices to ensure that children’s rights are not being breached. Addressing this loophole in the legislation is one step that is urgently required because media regulatory bodies currently lack clarity and suitable ethical guidelines on this issue.
The gap within the criminal justice legislative framework needs to urgently be addressed. Unless it is, there could be further case examples of children who may not go on to be charged but have their details published, shared, disseminated and permanently accessible via a basic internet search.
In this guest post Dr Daniel R. Thomas, University of Cambridge reviews research surrounding ethical issues in research using datasets of illicit origin. This post first appeared on “Light Blue Touchpaper” weblog written by researchers in the Security Group at the University of Cambridge Computer Laboratory.
On Friday at IMC I presented our paper “Ethical issues in research using datasets of illicit origin” by Daniel R. Thomas, Sergio Pastrana, Alice Hutchings, Richard Clayton, and Alastair R. Beresford. We conducted this research after thinking about some of these issues in the context of our previous work on UDP reflection DDoS attacks.
Data of illicit origin is data obtained by illicit means, such as exploiting a vulnerability or unauthorised disclosure; in our previous work this was leaked databases from booter services. We analysed existing guidance on ethics, and papers that used data of illicit origin, to see what issues researchers are encouraged to discuss and what issues they did discuss. We found wide variation in current practice. We encourage researchers using data of illicit origin to include an ethics section in their paper: to explain why the work was ethical, so that the research community can learn from it. At present, in many cases the positive benefits as well as the potential harms of research remain entirely unidentified. Few papers record explicit Research Ethics Board (REB) (aka IRB/Ethics Committee) approval for the activity that is described, and the justifications given for exemption from REB approval suggest deficiencies in the REB process. It is also important to focus on the "human participants" of research rather than the narrower "human subjects" definition, as not all the humans who might be harmed by research are its direct subjects.
In this guest post, Claire Bessant, Northumbria University, Newcastle, looks into the phenomenon of ‘sharenting’. Her article is relevant to the Information Law and Policy Centre’s annual conference coming up in November – Children and Digital Rights: Regulating Freedoms and Safeguards.
A toddler with birthday cake smeared across his face grins delightedly at his mother. Minutes later, the image appears on Facebook. A not uncommon scenario – 42% of UK parents share photos of their children online, with half of these parents sharing photos at least once a month.
Welcome to the world of “sharenting” – where more than 80% of children are said to have an online presence by the age of two. This is a world where the average parent shares almost 1,500 images of their child online before their fifth birthday.
But while a recent report from OFCOM confirms many parents do share images of their children online, the report also indicates that more than half (56%) of parents don't. Most of these non-sharenting parents (87%) actively choose not to do so to protect their children's private lives.
Over sharing
Parents often have good reasons for sharenting. It allows them to find and share parenting advice, to obtain emotional and practical support, and to maintain contact with relatives and friends.
Increasingly, though, concerns are being raised about “oversharenting” – when parents share too much, or, share inappropriate information. Sharenting can result in the identification of a child’s home, childcare or play location or the disclosure of identifying information which could pose risks to the child.
While many sharenters say they are conscious of the potential impact of their actions, and that they consider their children's views before sharenting, a recent House of Lords report on the matter suggests not all parents do. The "growing up with the internet" report reveals some parents share information they know will embarrass their children – and some never consider their children's interests before they post.
A recent survey for CBBC Newsround also warns that a quarter of children who've had their photographs sharented have been embarrassed or worried by these actions.
Think of the kids
Police in France and Germany have taken concrete steps to address sharenting concerns. They have posted Facebook warnings, telling parents of the dangers of sharenting, and stressing the importance of protecting children’s private lives.
Back in the UK, some academics have suggested the government should educate parents to ensure they understand the importance of protecting their child’s digital identity. But should the “nanny state” really be interfering in family life by telling parents how and when they can share their children’s information?
It’s clearly a tricky area to regulate, but it could be that the government’s recently published data protection bill may provide at least a partial answer.
In its 2017 manifesto, the Conservative party pledged to:
Give people new rights to ensure they are in control of their own data, including the ability to require major social media platforms to delete information.
In the recent Queen’s Speech, the government confirmed its commitment to reforming data protection law. And in August, it published a statement of intent providing more detail of its proposed reforms. In relation to the so-called “right to be forgotten” or “right to erasure”, the government states that:
Individuals will be able to ask for their personal data to be erased.
Users will also be able to ask social media platforms to delete information they posted during their childhood. In certain circumstances, social media companies will be required to delete any or all of a user’s posts. The statement explains:
For example, a post on social media made as a child would normally be deleted upon request, subject to very narrow exemptions.
The primary purpose of the data protection bill is to bring the new EU General Data Protection Regulation into UK law. This is to ensure UK law continues to accord with European data protection law post-Brexit – which is essential if UK companies are to continue to trade with their European counterparts.
It could also provide a solution for children whose parents like to sharent, because the new laws specify that an individual or organisation must obtain explicit consent or have some other legitimate basis to share an individual’s personal data. In real terms, this means that before a parent shares their child’s information online they should ask whether the child agrees.
Of course, this doesn’t mean parents are suddenly going to start asking for their children’s consent to sharent. But if a parent doesn’t obtain their child’s consent, or the child decides in the future that they are no longer happy for that sharented information to be online, the bill also provides another possible solution. Children could use the “right to erasure” to ask for social network providers and other websites to remove sharented information. Not perhaps a perfect answer, but for now it’s one way to put a stop to those embarrassing mugshots ending up in cyberspace for years to come.
The 5th interdisciplinary Conference on Trust, Risk, Information & the Law will be held on 25 April 2018 at the Holiday Inn, Winchester UK. Our overall theme for this conference will be: “Public Law, Politics and the Constitution: A new battleground between the Law and Technology?”
Our keynote speaker will be Jamie Bartlett, Director of the Centre for the Analysis of Social Media for Demos in conjunction with the University of Sussex, and author of several books including ‘Radicals’ and ‘The Dark Net’.
Papers are welcomed on any aspect of the conference theme. This might include, although is not restricted to:
- Fake news: definition, consequences, responsibilities and liabilities;
- The use of Big Data in political campaigning;
- Social media ‘echo chambers’ and political campaigning;
- Digital threats and impact on the political process;
- The Dark Net and consequences for the State and the Constitution;
- Big Tech – the new States and how to regulate them;
- The use of algorithmic tools and Big Data by the public sector;
- Tackling terrorist propaganda and digital communications within Constitutional values;
- Technology neutral legislation;
- Threats to individual privacy and public law solutions;
- Online courts and holding the State to account.
Proposals for workshops are also welcome.
The full call for papers and workshops can be found at https://journals.winchesteruniversitypress.org/index.php/jirpp/pages/view/TRIL.
Deadline for submissions is 26 January 2018.
In this guest post, Professor of Law and Innovation at Queen’s University Belfast Daithí Mac Síthigh reviews the recent Information Law and Policy Centre seminar that explored Internet intermediaries and their legal role and obligations.
Taking stock of recent developments concerning the liability and duties associated with being an Internet intermediary (especially the provision of hosting and social media services) was the theme of a recent event at the Information Law and Policy Centre. In my presentation, starting from about 20 years ago, I reviewed the early statutory interventions, including the broad protection against liability contained in US law (and the narrower shield in respect of intellectual property!), and the conditional provisions adopted by the European Union in Directive 2000/31/EC (E-Commerce Directive), alongside developments in specific areas, such as defamation. The most recent 10 years, though, have seen a trend towards specific solutions for one area of law or another (what I called ‘fragmentation’ in 2013), as well as a growing body of caselaw on liability, injunctions, and the like (both from the Court of Justice of the EU and the domestic courts).
So in 2017, what do we see? I argued that if there ever were a consensus on what intermediaries should or should not be expected to do, it is certainly no longer the case. From the new provisions of the Digital Economy Act 2017 creating a statutory requirement for ISPs to block access to websites not compliant with the new UK rules on age verification for sexually explicit material, to the proposed changes to the Audiovisual Media Services Directive that would create new requirements for video sharing platforms, to the Law Commission’s recommendations on contempt of court and temporary removal of material in order to ensure fair proceedings, new requirements or at least the idea of tweaking the obligations are popping up here and there. This is also seen through the frequent exhortations to service providers, especially social media platforms, to do more about harassment, ‘terrorist’ material, and the like. As the Home Secretary put it in her speech to the Conservative party conference last week, she calls on internet companies ‘to bring forward technology solutions to rid […] platforms of this vile terrorist material that plays such a key role in radicalisation. Act now. Honour your moral obligations.’ Meanwhile, the European Commission’s latest intervention, a Communication on ‘tackling illegal content online’ promotes a ‘more aligned approach [to removing illegal content, which] would make the fight against illegal content more effective’ and ‘reduce the cost of compliance’ – yet at this stage lacks clarity on how to handle divergence in legality between member states, the interaction with liability issues, and human rights issues (including the emerging jurisprudence of the ECtHR on the topic).
The Economist summarised developments in 2017 as being a 'global techlash', while Warby J's perceptive speech on media law pointed to the increased complexity of media law, 'mainly, though not entirely' as a result of legislative change. I called for a broader review of intermediary law in the UK (perhaps led by the Law Commissions in Scotland and England and Wales and the appropriate authorities in Northern Ireland), which would take a horizontal approach (i.e. encompassing multiple causes of action), address questions of power (though heeding Orla Lynskey's caution that power in this context is not solely market power), consider liability, duties, and knock-on effects together (rather than the artificial separation of maintaining immunity while adding new burdens), and respond to Brexit.
Prof. Lorna Woods summarised the growing concerns about blanket models, emphasising a shift towards ‘procedural responsibility’ in systems such as the DEA. She highlighted the uncertainty about the status of the ECD’s no general obligation to monitor clause (article 15), which was never transposed into a specific provision in the UK, and the potential interaction between the proposed AVMSD amendments and UK-specific actions. James Michael framed the issue as influenced by a struggle between legal approaches and the behaviour of technological companies, and wondered whether an international approach (perhaps in the spirit of the OECD’s approach to data protection) would be more fruitful. Further discussion with an engaged audience included the interaction between the status of data controller and the provisions on intermediaries, the role of industry self-regulation, emerging questions of international trade law and harmonisation, and developments elsewhere e.g. injunctions against search engines in Canada.
Professor Daithí Mac Síthigh
20 Nov 2017, 17:30 to 20 Nov 2017, 19:30
Institute of Advanced Legal Studies, 17 Russell Square, London WC1B 5DR
As part of the University of London’s Being Human Festival, the Information Law and Policy Centre will be hosting a film and discussion panel evening at the Institute of Advanced Legal Studies.
One of the Centre’s key aims is to promote public engagement by bringing together academic experts, policy-makers, industry, artists, and key civil society stakeholders (such as NGOs, journalists) to discuss issues and ideas concerning information law and policy relevant to the public interest that will capture the public’s imagination.
This event will focus on the implications posed by the increasingly significant role of artificial intelligence (AI) in society and the possible ways in which humans will co-exist with AI in future, particularly the impact that this interaction will have on our liberty, privacy, and agency. Will the benefits of AI only be achieved at the expense of these human rights and values? Do current laws, ethics, or technologies offer any guidance with respect to how we should navigate this future society?
The primary purpose of this event is to particularly encourage engagement and interest from young adults (15-18 years) in considering the implications for democracy, civil liberties, and human rights posed by the increasing role of AI in society that affect their everyday decision-making as humans and citizens. A limited number of places for this event will also be available to the general public.
Confirmed speakers include:
Chair: Dr Nora Ni Loideain, Director and Lecturer in Law, Information Law and Policy Centre, University of London
- Dr Hamed Haddadi, Associate Professor at the Faculty of Engineering, Imperial College London and lead researcher of The Human-Data Interaction Project
- Hal Hodson, Technology Journalist at The Economist
- Professor John Naughton, Project Leader of the Technology and Democracy Project, University of Cambridge and columnist for The Observer
- Renate Samson, Chief Executive of leading human rights organisation Big Brother Watch
BOOKING: This event is free but advance booking is required.
Readers of the Information Law and Policy Centre blog are invited to submit papers for the Global Fake News and Defamation Symposium on the theme of 'Fake News and Weaponized Defamation: Global Perspectives'.
Concept Note:
The notion of “fake news” has gained great currency in global popular culture in the wake of contentious social-media imbued elections in the United States and Europe. Although often associated with the rise of extremist voices in political discourse and, specifically, an agenda to “deconstruct” the power of government, institutional media, and the scientific establishment, fake news is “new wine in old bottles,” a phenomenon that has long historical roots in government propaganda, jingoistic newspapers, and business-controlled public relations. In some countries, dissemination of “fake news” is a crime that is used to stifle dissent. This broad conception of fake news not only acts to repress evidence-based inquiry of government, scientists, and the press; but it also diminishes the power of populations to seek informed consensus on policies such as climate change, healthcare, race and gender equality, religious tolerance, national security, drug abuse, poverty, homophobia, and government corruption, among others.
“Weaponized defamation” refers to the increasing invocation, and increasing use, of defamation and privacy torts by people in power to threaten press investigations, despite laws protecting responsible or non-reckless reporting. In the United States, for example, some politicians, including the current president, invoke defamation as both a sword and shield. Armed with legal power that individuals—and most news organizations—cannot match, politicians and celebrities, wealthy or backed by the wealth of others, can threaten press watchdogs with resource-sapping litigation; at the same time, some leaders appear to leverage their “lawyered-up” legal teams to make knowingly false attacks—or recklessly repeat the false attacks of others—with impunity.
Papers should have an international or comparative focus that engages historical, contemporary or emerging issues relating to fake news or “weaponized defamation.” All papers submitted will be fully refereed by a minimum of two specialized referees. Before final acceptance, all referee comments must be considered.
- Accepted papers will be peer reviewed and distributed during the conference to all attendees.
- Authors are given an opportunity to briefly present their papers at the conference.
- Accepted papers will be published in the Journal of International Media and Entertainment Law, the Southwestern Law Review, or the Southwestern Journal of International Law.
- Authors whose papers are accepted for publication will be provided with round-trip domestic or international travel (subject to caps) to Los Angeles, California, hotel accommodations, and complimentary conference registration.
Completed paper deadline: January 5, 2018.
The Journal of International Media & Entertainment Law is a faculty-edited journal published by the Donald E. Biederman Entertainment and Media Law Institute at Southwestern Law School, in cooperation with the American Bar Association’s Forum on Communications Law, and the ABA’s Forum on the Entertainment and Sports Industries.
In this guest post, Harry T Dyer, University of East Anglia, looks into the complicated relationship between social media and young people. His article is relevant to the Information Law and Policy Centre’s annual conference coming up in November – Children and Digital Rights: Regulating Freedoms and Safeguards.
Facebook’s latest attempt to appeal to teens has quietly closed its doors. The social media platform’s Lifestage app (so unsuccessful that this is probably the first time you’ve heard of it) was launched a little under a year ago to resounding apathy and has struggled ever since.
Yet, as is Silicon Valley’s way, Facebook has rapidly followed the failure of one venture with the launch of another one by unveiling a new video streaming service. Facebook Watch will host series of live and pre-recorded short-form videos, including some original, professionally made content, in a move that will allow the platform to more directly compete with the likes of YouTube, Netflix and traditional TV channels.
Lifestage was just one of a long series of attempts by Facebook to stem the tide of young people increasingly interacting across multiple platforms. With Watch, the company seems to have changed tack from this focus on retaining young people, instead targeting a much wider user base. Perhaps Facebook has learnt that it will simply never be cool – but that doesn't mean it still can't be popular.
Lifestage was intended to compete with the increasingly popular Snapchat, the photo and video-sharing app especially popular among teenagers. But the spin-off was never able to achieve the user numbers necessary to sustain the venture. Worryingly for Facebook, this is the third failed attempt to emulate Snapchat’s success among teens, following the short-lived Facebook Poke and Facebook Slingshot, which also came to quiet and unceremonious ends. Facebook has also incorporated several of Snapchat’s features such as its Stories function directly into its main app, to a lukewarm reception.
This comes as the social media market continues to expand rapidly. Competition is fierce and numerous established companies are vying with start-ups and rising brands to catch the attention of a growing and increasingly connected user base.
No longer do one or two companies hold a monopoly on the social media landscape. Most teenagers are increasingly using more than one platform for their online interactions (though noticeably this trend does appear to be somewhat different outside the Western world). Young people are experimenting with new formats and ways of interacting, from short videos and disappearing messages, to anonymous feedback apps such as Sarahah, the latest craze to explode in popularity and excite media commentators.
I don't want my mum to see this
Yet despite these issues, Facebook is still the world’s most popular social media platform by quite some distance and has more than 2 billion users worldwide. Recent data suggests it is almost as popular as Snapchat among teens and young users, as is Facebook’s other photo-sharing app, Instagram.
The problem, of course, is that Facebook’s popularity – and, crucially, the platform’s simplistic and user-friendly design – means teenagers’ parents, teachers, bosses and even grandparents now also use the platform. For teens, that means the platform has become a headache of competing and conflicting social obligations, with various aspects and contexts of their life collapsing into a single space.
The young people I talk to for my research suggest that Facebook’s broad appeal and easy design presents a unique experience for them. Facebook is a field of potential social landmines, with the fear that the diverse user base will see everything they post – causing anxiety, hedging and inaction.
Having to negotiate this broad audience means young people seem to be less likely to use some of the public aspects of Facebook, choosing instead to rely on features such as groups and private messaging. This explains why they seem to be relying increasingly on platforms such as Instagram and Snapchat to interact with their peers, a trend also noted by other researchers.
In this light, the attempt to encourage teenagers to use the same features as they do on Snapchat when Facebook's brand is so associated with a more public and socially difficult environment seems inherently flawed. We can't say where the company will go in the future but it seems likely it will struggle to ever be as central to young people's online social experiences as it once was.
Watch targets a wider audience
Yet the launch of Facebook Watch suggests perhaps the company has learnt its lesson. The new service is an attempt to create a broader space that can appeal to their wide user base, rather than aiming content, ideas and spaces specifically at teens and young people.
With the announcement of the video-sharing service, Facebook has put out a call for “community orientated” original shows. It will provide users with video recommendations based on what others – and in particular their friends – are watching. In this way, Facebook Watch will allow users to find content that reflects their interests and friendships, whoever they are. Rather than attempting to retain and target a specific demographic, Facebook Watch appears to be acknowledging the platform’s broader appeal.
This seems to match Facebook’s moves away from being a pure social networking platform and towards a much broader one-stop hub for news and content. With the launch of Watch, users need never leave the walled garden of Facebook as they can view both content embedded from around the web and original videos hosted on the site. And with Facebook already ranked second only to YouTube for online video content, again this move looks like an attempt to cater to a much broader market than teens alone.
The fact that Facebook seems increasingly keen to nurture its more diverse user base is likely to be a continuing concern for young people worried about their interactions on the platform. But, on the other hand, given YouTube’s massive appeal to the teen market, Watch may serve as a way to entice teens back to Facebook. Really, there’s only one way to sum up young people’s relationship with Facebook: it’s complicated.
In this guest post, Dr Natalia Kucirkova (UCL) and Professor Sonia Livingstone (London School of Economics and Political Science) explore why 'screen time' is an outdated term and why we need to recognise the power of learning through screen-based technologies. Their article is relevant to the Information Law and Policy Centre's annual conference coming up in November – Children and Digital Rights: Regulating Freedoms and Safeguards.
The idea of “screen time” causes arguments – but not just between children and their anxious parents. The Children’s Commissioner for England, Anne Longfield, recently compared overuse of social media to junk food and urged parents to regulate screen time using her “Digital 5 A Day” campaign.
This prompted the former director of Britain’s electronic surveillance agency, GCHQ, to respond by telling parents to increase screen time for children so they can gain skills to “save the country”, since the UK is “desperately” short of engineers and computer scientists.
Meanwhile, parents are left in the middle, trying to make sense of it all.
But the term "screen time" is problematic to begin with. A screen can refer to a child's iPad used to Skype grandparents, a Kindle for reading poetry, a television for playing video games, or a desktop computer for homework. Most screens are now multifunctional, so unless we specify the content, context and connections involved in particular screen time activities, any discussion will be muddled.
Measuring technology usage in terms of quantity rather than quality is also difficult. Children spend time on multiple devices in multiple places, sometimes in short bursts, sometimes constantly connected. Calculating the incalculable puts unnecessary pressure on parents, who end up looking at the clock rather than their children.
The Digital 5 A Day campaign has five key messages, covering areas like privacy, physical activity and creativity. Its focus on constructive activities and attitudes towards technology is a good start. Likewise, a key recommendation of the LSE Media Policy Project report was for more positive messaging about children’s technology use.
Technology use is complex and takes time to understand. Content matters. Context matters. Connections matter. Children's age and capacity matter. Reducing this intricate mix to a simple digital five-a-day runs the risk of losing all the nutrients. Just like the NHS's Five Fruit and Veg A Day Campaign, future studies will no doubt announce that five ought to be doubled to ten.
Another problem will come from attempts to interpret the digital five-a-day as a quality indicator. Commercial producers often use government campaigns to drive sales and interest in their products. If a so-called “educational” app claims that it “supports creative and active engagement”, parents might buy it – but there will be little guarantee that it will offer a great experience. It is an unregulated and confusing market – although help is currently provided by organisations providing evidence-based recommendations such as the NSPCC, National Literacy Trust, Connect Safely, Parent Zone, and the BBC’s CBeebies.
The constant flow of panicky media headlines doesn't help parents or improve the level of public discussion. The trouble is that there's too little delving into the whys and wherefores behind each story, nor much independent examination of the evidence that might (or might not) support the claims being publicised. Luckily, some bodies, such as the Science Media Centre, do try to act as responsible intermediaries.
When it comes to young people and technology, it’s vital to widen the lens – away from a close focus on time spent, to the reality of people’s lives. Today’s children grow up in increasingly stressed, tired and rushed modern families. Technology commentators often revert to food metaphors to call for a balanced diet or even an occasional digital detox, and that’s fine to a degree.
But such metaphors can be taken too far, especially when the underlying harms are contested by science. "One-size-fits-all" solutions don't work when they are taken too literally, when they become yet another reason to blame parents (or children), or when they don't allow for the diverse conditions of real people's lives.
If there is a food metaphor that works for technology, it’s that we should all try some humble pie when it comes to telling others how to live. “Screen time” is an outdated and misguided shorthand for all the different ways of interacting, creating and learning through screen-based technologies. It’s time to drop it.
In this guest post, Vladlena Benson, Kingston University, assesses the need to encourage conscious social media use among the young. Her article is relevant to the Information Law and Policy Centre’s annual conference coming up in November – Children and Digital Rights: Regulating Freedoms and Safeguards.
Teenagers in Britain are fortunate to have access to computers, laptops and smartphones from an early age. A child in the UK receives a smartphone at around the age of 12 – among the earliest in Europe. The natural consequence of this is that children spend a significant amount of their time on the internet. In the nearly 20 years since the first social networks appeared on the internet, there has been considerable research into their psychological, societal, and health effects. While these have often been seen as largely negative over the years, there is plenty of evidence to the contrary.
A recent report from the Education Policy Institute, for example, studied children’s use of the internet and their mental health. The report found that teenagers value social networks as a way of connecting with friends and family, maintaining their networks of friends, and long distance connections. Teenagers see social networking as a comfortable medium for sharing their issues and finding solutions to problems such as social isolation and loneliness. They are also more likely to seek help in areas such as health advice, unknown experiences, and help with exams and study techniques.
Social networks afford the opportunity to find people with similar interests, or to support teamwork in school projects. In unsettled economic and political times, teenagers use social networks as a means to be heard and to get involved in political activism, as well as volunteering and charitable activities.
Teenagers also leverage social networks to engage with creative projects, and many young artists are first noticed through the exposure offered by the rich networking opportunities of social media, such as musicians on MySpace or photographers on image sharing sites Flickr or Behance. Teenagers looking to pursue careers in art or other creative industries turn to social platforms in order to create their portfolios as well as to create with others.
These opportunities have a positive impact on adolescent character formation and the development of individual identity, and help young people toward choosing a career path. These choices are made at an early age, and to this end social networks are enriching young people's lives.

Risks not to be ignored
On the other hand, the report also listed a substantial number of negative influences stemming from social media use, ranging from time-wasting and addictive, compulsive use to cyberbullying, radicalisation, stress and sexual grooming, to name just a few.
Unsurprisingly, governments are concerned with the impact of social networking on the vulnerable. Concern over the uncontrolled nature of social networking has prompted action from parents and politicians. The issue of children roaming freely on social networks surfaced in the recent UK general election and was mentioned in the Conservative party manifesto, which made a key pledge of "safety for children online, and new rights to require social media companies to delete information about young people as they turn 18". This is a tall order, as it would require erasing tens of millions of teenagers' profiles on around 20 different social platforms, hosted in different countries worldwide.
The Conservatives also suggested the party would “create a power in law for government to introduce an industry-wide levy from social media companies and communication service providers to support awareness and preventative activity to counter internet harms”. Awareness-raising is an important step towards encouraging conscious social media use among the young. But despite continuing efforts to educate youngsters about the dangers (and, to be fair, the benefits) of using social media, many are wary of the impact technology may have on overly-social teenagers once outside parental control.
It has been shown that teenagers increasingly use social networks in private, leaving parents outside environments where children are exposed to real-time content and largely unguarded instant communications. The concern raised in the report that "responses to protect, and build resilience in, young people are inadequate and often outdated" is timely. While schools are tasked with educating teenagers about the risks of social media, very few parents are able to effectively introduce controls on the content their children access and monitor the evolving threats that operate online.

Speak their language
A recent study of compulsive social media use showed that it is not the user's age that matters, but their individual motivations. In fact, users who are highly sociable and driven by friends towards compulsive social media use suffer physically and socially. On the other hand, when users are driven by hedonic (fun-seeking) motivations, their physical health and sociability improve. This explains why teenagers in the UK see social networking as a positive phenomenon that enriches their social life. There is clearly potential to harness these positives.
While the tech giants that run the social networks with billions of users must play their part to ensure the safety of their youngest users, it is also parents' role to talk openly with their children about their use of social networks and demand expected standards of use. Teenagers have questions about life and are looking for answers to their problems as they go through a challenging time of life. With the prime minister naming "mental health as a key priority", schools, parents, politicians and social networking platforms should help teenagers to build resilience to what they encounter online and how it makes them feel, rather than adopting only a safeguarding approach. It's interesting to note that 78% of young people who contact the organisation Childline now do so online: teachers, family and friends providing support should make the most of a medium which today's children and teenagers are comfortable with.
Readers of the Information and Law Policy Centre blog are invited to participate in the second, full-day International Law for the Sustainable Development Goals Workshop at the Department of International Law, University of Groningen, NL.
Our aim with the second track of this one-day Workshop is to explore the right to science’s potential value in the context of technology & knowledge transfer and sustainable development. More specifically, we aim to discuss the role of the right to science as (a) a means to implement the SDGs and related human rights; (b) an enabler of international cooperation regarding technology and knowledge sharing; and (c) a stand-alone human right and the respective obligations of States in enhancing systemic policy and institutional coherence and informing policy development and coordination.
Please find the detailed Call for Papers available here.
We invite abstract proposals from interested scholars from all disciplines. Proposals should not exceed 500 words in length. Please send your proposals as an attachment to Marlies Hesselman (email@example.com) for Track 1 and to Mando Rachovitsa (firstname.lastname@example.org) for Track 2. The deadline for abstracts is 15 September 2017. All proposals will undergo peer review and notifications of acceptance will be sent out by 20 September 2017.
Draft papers are expected to be delivered by 15 November 2017 for circulation among participants. We plan to pursue the publication of a special issue as a result of this Workshop.
The Workshop is scheduled to take place on 24 November 2017 at the University of Groningen and is part of the 2017-2018 Workshop Series "International Law for the Sustainable Development Goals".
Information Law and Policy Centre’s Annual Conference 2017 – Children and Digital Rights: Regulating Freedoms and Safeguards
Date 17 Nov 2017, 09:30 to 17 Nov 2017, 17:30
Venue Institute of Advanced Legal Studies, 17 Russell Square, London WC1B 5DR
The Internet provides children with more freedom to communicate, learn, create, share, and engage with society than ever before. For instance, research by Ofcom in 2016 found that 72% of young teenagers (aged twelve to fifteen) in the UK have social media accounts, which are often used for homework groups. 20% of the same group have made their own digital music, and 30% have used the Internet for civic engagement by signing online petitions or sharing and talking online about the news.
Interacting within this connected digital world, however, also presents a number of challenges to ensuring the adequate protection of a child’s rights to privacy, freedom of expression, and safety, both online and offline. These risks range from children being unable to identify advertisements on search engines to bullying in online chat groups. Children may also be targeted via social media platforms with methods (such as fake online identities or manipulated photos/images) specifically designed to harm them or exploit their particular vulnerabilities and naivety.
At the ILPC’s Annual Conference, regulators, practitioners, civil society, and leading academic experts will address and examine the key legal frameworks and policies being used and developed to safeguard these freedoms and rights. These legislative and policy regimes will include the UN Convention on the Rights of the Child, and the related provisions (such as consent, transparency, and profiling) under the UK Digital Charter, and the Data Protection Bill which will implement the EU General Data Protection Regulation.
The ILPC’s Annual Conference and Lecture will take place on Friday 17th November 2017, followed by an evening reception.
Attendance will be free of charge thanks to the support of the IALS and our sponsors, although registration is required as places are limited.
Key speakers, chairs, and discussants at the Annual Conference will provide a range of national and international legal insights and perspectives from the UK, Israel, Australia, and Europe, and will include:
- Baroness Beeban Kidron OBE, Film-maker, Member of The Royal Foundation Taskforce on the Prevention of Cyberbullying, and Founder of 5Rights
- Anna Morgan, Head of Legal, Deputy Data Protection Commissioner of Ireland;
- Lisa Atkinson, Group Manager on Policy Engagement, Information Commissioner’s Office;
- Renate Samson, Chief Executive of Big Brother Watch;
- Graham Smith, Bird & Bird LLP, Solicitor and leading expert in UK Internet Law.
The best papers from the conference’s plenary sessions and panels will be featured in a special issue of Bloomsbury’s Communications Law journal, following a peer-review process. Those giving papers will be invited to submit full draft papers to the journal by 1st November 2017 for consideration by the journal’s editorial team.
About the Information Law and Policy Centre at the IALS:
The Information Law and Policy Centre (ILPC) produces, promotes, and facilitates research about the law and policy of information and data, and the ways in which law both restricts and enables the sharing, and dissemination, of different types of information.
The ILPC is part of the Institute of Advanced Legal Studies (IALS), which was founded in 1947. It was conceived, and is funded, as a national academic institution, attached to the University of London, serving all universities through its national legal research library. Its function is to promote, facilitate, and disseminate the results of advanced study and research in the discipline of law, for the benefit of persons and institutions in the UK and abroad.
The ILPC’s Annual Conference and Annual Lecture form part of a series of events celebrating the 70th Anniversary of IALS in November.
About Communications Law (Journal of Computer, Media and Telecommunications Law):
Communications Law is a well-respected quarterly journal published by Bloomsbury Professional covering the broad spectrum of legal issues arising in the telecoms, IT, and media industries. Each issue brings you a wide range of opinion, discussion, and analysis from the field of communications law. Dr Paul Wragg, Associate Professor of Law at the University of Leeds, is the journal’s Editor in Chief.
In this guest post Lorna Woods, Professor of Internet Law at the University of Essex, explores the EU’s proposed Passenger Name Record (PNR) agreement with Canada. This post first appeared on the blog of Steve Peers, Professor of EU, Human Rights and World Trade Law at the University of Essex.
Opinion 1/15 EU/Canada PNR Agreement, 26th July 2017

Facts
Canadian law required airlines, in the interests of the fight against serious crime and terrorism, to provide certain information about passengers (API/PNR data) – an obligation which required airlines to transfer data outside the EU, engaging EU data protection rules. The PNR data includes the names of air passengers, the dates of intended travel, the travel itinerary, and information relating to payment and baggage. The PNR data may reveal travel habits, relationships between individuals, and information on the financial situation or dietary habits of individuals. To regularise the transfer of data, and to support police cooperation, the EU negotiated an agreement with Canada specifying the data to be transferred and the purposes for which the data could be used, as well as some processing safeguard provisions (e.g. use of sensitive data, security obligations, oversight requirements, access by passengers). The data was permitted to be retained for five years, albeit in a depersonalised form. Further disclosure of the data beyond Canada and the Member States was permitted in limited circumstances. The European Parliament requested an opinion from the Court of Justice under Article 218(11) TFEU as to whether the agreement satisfied fundamental human rights standards and whether the appropriate Treaty base had been used for the agreement.

Opinion
The Court noted that the agreement fell within the EU’s constitutional framework, and must therefore comply with its constitutional principles, including (though this point was not made express), respect for fundamental human rights (whether as a general principle or by virtue of the EU Charter – the EUCFR).
After dealing with questions of admissibility, the Court addressed the question of appropriate Treaty base. It re-stated existing principles (elaborated, for example, in Case C‑263/14 Parliament v Council, judgment 14 June 2016, EU:C:2016:435) with regard to choice of Treaty base generally: the choice must rest on objective factors (including the aim and the content of that measure) which are amenable to judicial review. In this context the Court found that the proposed agreement has two objectives: safeguarding public security; and safeguarding personal data [opinion, para 90]. The Court concluded that the two objectives were inextricably linked: while the driver for the transfer of PNR data was the protection of public security, the transfer of data would be lawful only if data protection rules were respected [para 94]. Therefore, the agreement should be based on both Article 16(2) (data protection) and Article 87(2)(a) TFEU (police cooperation). It held, however, that Article 82(1)(d) TFEU (judicial cooperation) could not be used, partly because judicial authorities were not included in the agreement.
Looking at the issue of data protection, the Court re-stated the question as being ‘on the compatibility of the envisaged agreement with, in particular, the right to respect for private life and the right to the protection of personal data’ [para 119]. It then commented that although both Article 16 TFEU and Article 8 EUCFR enshrine the right to data protection, in its analysis it would refer to Article 8 only, because that provision lays down in a more specific manner the conditions for data processing. The agreement refers to the processing of data concerning identified individuals, and therefore may affect the fundamental right to respect for private life guaranteed in Article 7 EUCFR as well as the right to protection to personal data in Article 8 EUCFR. The Court re-iterated a number of principles regarding the scope of the right to private life:
‘the communication of personal data to a third party, such as a public authority, constitutes an interference with the fundamental right enshrined in Article 7 of the Charter, whatever the subsequent use of the information communicated. The same is true of the retention of personal data and access to that data with a view to its use by public authorities. In this connection, it does not matter whether the information in question relating to private life is sensitive or whether the persons concerned have been inconvenienced in any way on account of that interference’ [para 124].
The transfer of PNR data and its retention and any use constituted an interference with both Article 7 [para 125] and Article 8 EUCFR [para 126]. In assessing the seriousness of the interference, the Court flagged ‘the systematic and continuous’ nature of the PNR system, the insight into private life of individuals, the fact that the system is used as an intelligence tool and the length of time for which the data is available.
Interferences with these rights may be justified. Nonetheless, there are constraints on any justification: Article 8(2) of the EU Charter specifies that processing must be ‘for specified purposes and on the basis of the consent of the person concerned or some other legitimate basis laid down by law’; and, according to Article 52(1) of the EU Charter, any limitation must be provided for by law and respect the essence of those rights and freedoms. Further, limitations must be necessary and genuinely meet objectives of general interest recognised by the Union or the need to protect the rights and freedoms of others.
Following WebMindLicenses (Case C‑419/14, judgment of 17 December 2015, EU:C:2015:832, para 81), the law that permits the interference should also set down the extent of that interference. Proportionality requires that any derogation from and limitation on the protection of personal data should apply only insofar as is strictly necessary. To this end and to prevent the risk of abuse, the legislation must set down ‘clear and precise rules governing the scope and application of the measure in question and imposing minimum safeguards’, specifically ‘indicat[ing] in what circumstances and under which conditions a measure providing for the processing of such data may be adopted’ [para 141], especially when automated processing is involved.
The Court considered whether there was a legitimate basis for the processing, noting that although passengers may be said to consent to the processing of PNR data, this consent related to a different purpose. The transfer of the PNR data is not conditional on the specific consent of the passengers and must therefore be grounded on some other basis, within the terms of Article 8(2) EUCFR. The Court rejected the Parliament’s submission that the meaning of ‘law’ be restricted to ‘legislative act’ internally. The Court, following the reasoning of the Advocate General, found that in this regard the international agreement was the external equivalent of the legislative act.
In line with its previous jurisprudence, the Court accepted that public security is an objective of public interest capable of justifying even serious interferences with Articles 7 and 8 EUCFR. It also noted that everybody has the right to security of the person (Art. 6 EUCFR), though this point was taken no further. The Court considered that PNR data revealed only limited aspects of a person’s private life, so that the essence of the right was not adversely affected [para 151]. In principle, limitation may then be possible. The Court accepted that PNR data transfer was appropriate, but not that the test of necessity was satisfied. It agreed with the Advocate General that the categories of data to be transferred were not sufficiently precise, specifically ‘available frequent flyer and benefit information (free tickets, upgrades, etc.)’, ‘all available contact information (including originator information)’ and ‘general remarks including Other Supplementary Information (OSI), Special Service Information (SSI) and Special Service Request (SSR) information’. Although the agreement required the Canadian authorities to delete any data transferred to them which fell outside these categories, this obligation did not compensate for the lack of precision regarding the scope of these categories.
The Court noted that the agreement identified a category of ‘sensitive data’; it was therefore to be presumed that sensitive data would be transferred under the agreement. The Court then reasoned:
any measure based on the premiss that one or more of the characteristics set out in Article 2(e) of the envisaged agreement may be relevant, in itself or in themselves and regardless of the individual conduct of the traveller concerned, having regard to the purpose for which PNR data is to be processed, namely combating terrorism and serious transnational crime, would infringe the rights guaranteed in Articles 7 and 8 of the Charter, read in conjunction with Article 21 thereof [para 165]
Additionally, any transfer of sensitive data would require a ‘precise and particularly solid’ reason beyond that of public security and prevention of terrorism. This justification was lacking. The transfer of sensitive data and the framework for the use of those data would be incompatible with the EU Charter [para 167].
While the agreement tried to limit the impact of automated decision-making, the Court found it problematic because of the need to have reliable models on which the automated decisions were made. These models, in the view of the Court, must produce results that identify persons under a ‘reasonable suspicion’ of participation in terrorist offences or serious transnational crime and should be non-discriminatory. Models/databases should also be kept up-to-date and accurate and subject to review for bias. Because of the error risk, all positive automated decisions should be individually checked.
In terms of the purposes for processing the data, the definitions of terrorist offences and serious transnational crime were sufficiently clear. There were, however, other provisions allowing case-by-case assessment. These provisions (Article 3(5)(a) and (b) of the treaty) were found to be too vague. By contrast, the Court determined that the authorities who would receive the data were sufficiently identified. Further, it accepted that the transfer of data of all passengers, whether or not they were identified as posing a risk, does not exceed what is necessary, as passengers must comply with Canadian law and 'the identification, by means of PNR data, of passengers liable to present a risk to public security forms part of border control' [para 188].
Relying on its recent judgment in Tele2/Watson (Joined Cases C‑203/15 and C‑698/15, EU:C:2016:970), which I discussed here, the Court reiterated that there must be a connection between the data retained and the objective pursued for the duration of the time the data are held, which brought into question the use of the PNR data after passengers had disembarked in Canada. Further, the use of the data must be restricted in accordance with those purposes. However,
where there is objective evidence from which it may be inferred that the PNR data of one or more air passengers might make an effective contribution to combating terrorist offences and serious transnational crime, the use of that data does not exceed the limits of what is strictly necessary [para 201].
Following verification of passenger data and permission to enter Canadian territory, the use of PNR data during passengers' stay must be based on new justifying circumstances. The Court expected that this should be subject to prior review by an independent body. The Court held that the agreement did not meet the required standards. Similar points were made, even more strongly, in relation to the use of PNR data after the passengers had left Canada. In general, this was not strictly necessary, as there would no longer be a connection between the data and the objective pursued by the PNR Agreement such as to justify the retention of their data. PNR data may be stored in Canada, however, when particular passengers present a risk of terrorism or serious transnational crime. Moreover, given the average lifespan of international serious crime networks and the duration and complexity of investigations relating to them, the Court did not hold that the retention of data for five years went beyond the limits of necessity [para 209].
The agreement allows PNR data to be disclosed by the Canadian authority to other Canadian government authorities and to government authorities of third countries. The recipient country must satisfy EU data protection standards; an international agreement between the third country and the EU or an adequacy decision would be required. There is a further, unlimited and ill-defined possibility of disclosure to individuals ‘subject to reasonable legal requirements and limitations … with due regard for the legitimate interests of the individual concerned’. This provision did not satisfy the necessity test.
To ensure that the individuals' rights to access their data and to have data rectified is protected, in line with Tele2/Watson, passengers must be notified of the transfer of their PNR data to Canada and of its use as soon as that information is no longer liable to jeopardise the investigations being carried out by the government authorities referred to in the envisaged agreement. In this respect, the agreement is deficient. While passengers are told that the data will be used for security checks/border control, they are not told whether their data has been used by the Canadian Competent Authority beyond use for those checks. While the Court accepted that the agreement provided passengers with a possible remedy, the agreement was deficient in that it did not guarantee in a sufficiently clear and precise manner that the oversight of compliance would be carried out by an independent authority, as required by Article 8(3) EUCFR.

Comment
There are lots of issues in this judgment, of interest from a range of perspectives, but its length and complexity means it is not an easy read. Because of these characteristics, a blog – even a lengthy blog – could hardly do justice to all issues, especially as in some instances, it is hardly clear what the Court’s position is.
On the whole, the Court follows the approach of its Advocate General, Mengozzi, on a number of points, specifically referring back to his Opinion. There is, as seems increasingly to be the trend, heavy reliance on existing case law, and it is notable that the Court refers repeatedly to its ruling in Tele2/Watson. This may be a judicial attempt to suggest that Tele2/Watson was not an aberration and to reinforce its status as good law, if that were in any doubt. It also operates to create a body of surveillance law rulings that are hopefully consistent in underpinning principles and approach, and certainly some of the points in earlier case law are reiterated with regard to the importance of ex ante review by independent bodies, rights of redress and the right of individuals to know that they have been subject to surveillance.
The case is of interest not only with regard to mass surveillance but more generally in relation to Article 16(2) TFEU. It is also the first time an opinion has been given on a draft agreement considering its compatibility with human rights standards as well as the appropriate Treaty base. In this respect the judgment may be a little disappointing; certainly on Article 16, the Court did not go into the same level of detail as in the AG's opinion [AG114-AG120]. Instead it equated Article 16 TFEU to Article 8 EUCFR, and based its analysis on the latter provision.
As a general point, it is evident that the Court has adopted a detailed level of review of the PNR agreement. The outcome of the case has widely been recognised as having implications, as – for example – discussed earlier on this blog. Certainly, as the Advocate General noted [AG para 4], there is a possible impact on other PNR agreements, which relate to the same sorts of data shared for the same objectives. The EDPS made this point too, in the context of the EU PNR Directive:
Since the functioning of the EU PNR and the EU-Canada schemes are similar, the answer of the Court may have a significant impact on the validity of all other PNR instruments … [Opinion 2/15, para 18]
There are other forms of data sharing agreement, for example SWIFT, the Umbrella Agreement and the Privacy Shield (and other adequacy decisions), the last of which is coming under pressure in any event (DRI v Commission (T-670/16) and La Quadrature du Net and Others v Commission (T-738/16)). Note that in this context the issue is not just one of considering the safeguards for the protection of rights but also relates to the Treaty base. The Court found that Article 16 must be used and that – because there was no role for judicial authorities, still less their cooperation – the use of Article 82(1)(d) is wrong. It has, however, been used, for example, as regards other PNR agreements. This means that the basis for those agreements is thrown into doubt.
While the Court agreed with its Advocate General in suggesting that a double Treaty base was necessary given the inextricable linkage, there is some room to question this assumption. It could also be argued that there is a dominant purpose, as the primary purpose of the PNR agreement is to protect personal data, albeit with a different objective in view, that of public security. In the background, however, is the position of the UK, Ireland and Denmark and their respective ‘opt-outs’ in the field. A finding of a joint Treaty base made possible the argument of the Court that:
since the decision on the conclusion of the envisaged agreement must be based on both Article 16 and Article 87 TFEU and falls, therefore, within the scope of Chapter 5 of Title V of Part Three of the FEU Treaty in so far as it must be founded on Article 87 TFEU, the Kingdom of Denmark will not be bound, in accordance with Articles 2 and 2a of Protocol No 22, by the provisions of that decision, nor, consequently, by the envisaged agreement. Furthermore, the Kingdom of Denmark will not take part in the adoption of that decision, in accordance with Article 1 of that protocol. [para 113, see also para 115]
The position would, however, have been different had the agreement been found to be predominantly about data protection and therefore based on Article 16 TFEU alone.
Looking at the substantive issues, the Court clearly accepted the need for PNR to challenge the threat from terrorism, noting in particular that Article 6 of the Charter (the “right to liberty and security of person”) can justify the processing of personal data. While it accepted that this resulted in the systematic transfer of large quantities of data about passengers, there are no comments about mass surveillance. Yet is this not similar to the ‘general and indiscriminate’ collection and analysis rejected by the Court in Tele2/Watson [para 97], which cannot be seen as automatically justified even in the context of the fight against terrorism [paras 103 and 119]? Certainly, the EDPS took the view in its opinion on the EU PNR Directive that “the non-targeted and bulk collection and processing of data of the PNR scheme amount to a measure of general surveillance” [Opinion 1/15, para 63]. It may be that the difference is in the nature of the data; even if this is so, the Court does not make this argument. Indeed, it makes no argument but rather weakly accepts the need for the data. On this point, it should be noted that “the usefulness of large-scale profiling on the basis of passenger data must be questioned thoroughly, based on both scientific elements and recent studies” [Art. 29 WP Opinion 7/2010, p. 4]. In this respect, Opinion 1/15 does not take as strong a stand as Tele2/Watson [cf. paras 105-106]; it seems that the Court was less emphatic about the significance of the surveillance even than the Advocate General [AG 176].
In terms of justification, while the Court accepts that the transfer of data and its analysis may give rise to intrusion, it suggests that the essence of the right has not been affected. In this it follows the approach in the communications data cases. It is unclear, however, what the essence of the right is; it seems that no matter how detailed a picture of an individual can be drawn from the analysis of data, the essence of the right remains intact. If the implication is that where the essence of the right is affected then no justification for the intrusion could be made, a narrow view of essence is understandable. This does not, however, answer the question of what the essence is and, indeed, whether the essence of the right is the same for Article 7 as for Article 8. In this case, the Court has once again referred to both articles, without delineating the boundaries between them, but then proceeded to base its analysis mainly on Article 8.
In terms of the relationship between provisions, it is also unclear how Article 8(2) and Article 52 relate to one another. The Court bundles the requirements of these two provisions together, but they serve different purposes: Article 8(2) further elaborates the scope of the right; Article 52 deals with the limitations of Charter rights. Despite this, it seems that some of the findings will apply to Article 52 in the context of other rights. For example, in holding that an international agreement constitutes law for the purposes of the EUCFR, the Court took a broader approach to the meaning of ‘law’ than the Parliament had argued for. This, however, seems a sensible approach, avoiding undue formality.
One further point about the approach to interpreting exceptions to the rights and Article 52 can be made. It seems that the Court has not followed the Advocate General who had suggested that strict necessity should be understood in the light of achieving a fair balance [AG207].
Some specific points are worth highlighting. The Court held that sensitive data (information that reveals racial or ethnic origin, political opinions, religious or philosophical beliefs, trade-union membership, information about a person’s health or sex life) should not be transferred. It is not clear what interpretation should be given to these data, especially as regards proxies for sensitive data (e.g. food preferences may give rise to inferences about a person’s religious beliefs).
One innovation in the PNR context is the distinction the Court introduced between the use of PNR data on entry, use while the traveller is in Canada, and use after the person has left, which perhaps mitigates the Court’s acceptance of undifferentiated surveillance of travellers. The Court’s view of the acceptability of use in relation to this last category is the most stringent. While the Court accepts the link between the processing of PNR data and border security on arrival, after departure it expects that link to be proven; absent such proof, there is no justification for the retention of data. Does this mean that the PNR data of persons who are not suspected of terrorism or transnational crime should be deleted at the point of their departure? Such a requirement surely gives rise to practical problems and would seem to limit the Court’s earlier acceptance of the use of general PNR data to verify/update computer models [para 198].
One of the weaknesses of the Court’s case law so far has been a failure to consider investigatory techniques, and whether all are equally acceptable. Here we see the Court beginning to consider the use of automated intelligence techniques. While the Court does not go into detail on all the issues to which predictive policing and big data might give rise, it does note that models must be accurate. It also refers to Article 21 EUCFR (non-discrimination). Since this section is phrased in general terms, it has potentially wide-reaching application, perhaps even beyond the public sector.
The Court’s judgment has further implications as regards the sharing of PNR and other security data with other countries besides Canada, most notably in the context of EU/UK relations after Brexit. Negotiators now have a clearer indication of what it will take for an agreement between the EU and a non-EU state to satisfy the requirements of the Charter, in the ECJ’s view. Time will tell what impact this ruling will have on the progress of those talks.
Barnard & Peers: chapter 25
JHA4: chapter II:9
Marion Oswald, Senior Fellow at the Centre for Information Rights, University of Winchester, contributes to the blog, examining the British and Irish Law Education and Technology Association (BILETA) consultation run by the Centre for Information Rights. The consultation took place on 7 June 2017 and concerned the impact of broadcast and social media on the privacy and best interests of young children.
In 1985, in his book ‘Amusing Ourselves to Death’, Professor Neil Postman warned us that:
‘To be unaware that a technology comes equipped with a programme for social change, to maintain that technology is neutral…is…stupidity plain and simple.’
Postman was mainly concerned with the impact of television, with its presentation of news as ‘vaudeville’ and thus its influence on other media to do the same. He was particularly exercised by the co-option by television of ‘serious modes of discourse – news, politics, science, education, commerce, religion’ and the transformation of these into entertainment packages. This ‘trivial pursuit’ information environment, he argued, risked amusing us into indifference and a kind of ‘culture-death.’ Can a culture survive ‘if the value of its news is determined by the number of laughs it provides’?
We appear not to have heeded Postman’s warnings. Many might say they are over-blown. 21st century digital culture continues to emphasise the image, now often combined with ‘bite-sized’ written messages. We have instant 24/7 access to information and rolling news. We are fascinated by reality programming and digital technologies that allow us to scrutinise each other’s lives. Having lived through this digital revolution, I know and appreciate its benefits, especially in relation to the expansion of knowledge and horizons, and to freedom of expression. Like many others, however, I have concerns. What, for instance, would Postman have made of ‘sharenting’; of the ‘YouTube Families’ phenomenon; of the way that younger and younger children now feature on mainstream broadcasts, with public comment via social media using the inevitable hashtag?
It was such concerns that inspired the BILETA consultation workshop held on 7 June 2017 at IALS to discuss the legislative, regulatory and ethical framework surrounding the depiction of young children on digital, online and broadcast media. The full report from the workshop is now available here. As was to be expected, the discussion was wide-ranging with a variety of opinions expressed. The report’s authors have attempted to distil the debate into proposals which we hope will move the debate forward, and generate further discussion and no doubt criticism! (The recommendations represent the views of the report’s authors and do not necessarily represent the views of workshop participants.)
The workshop focused first on a child’s ‘reasonable expectation of privacy’, a concept that was described by one participant as ‘highly artificial and strained’. Why, it was asked, should a child’s privacy depend upon his or her parents’ privacy expectations? The report’s authors propose that young children should have a privacy right independent of their parents’ privacy expectations. Such a right could be trumped by other rights or interests, for instance public interest exceptions relating to news and current affairs reporting, journalism and the arts, and the parents’ right to freedom of expression. The report’s authors recommend, however, that there should be a clearer requirement and process for the child’s interests to be considered alongside the potential benefits of publication.
Mainstream broadcasters take a variety of approaches to the depiction, and identification, of young children in documentary and science programming. The media should continue to reflect the lives of children, and it is in no-one’s interests to have a media where children simply do not appear for fear of potential harm. Programmes made by highly regulated broadcasters, for whom ensuring the wellbeing of children is of paramount importance, can help to set the high ethical watermark in this area for other forms of media to follow. We should continue to monitor the inclusion of young children in ‘Science Entertainment’ broadcasts, however, and the parallel impact of social media. The report’s authors also recommend that more research be undertaken into the impact of broadcast media exposure of young children, to understand what effect it has on them, both positive and negative. Once these effects are more fully understood, actions can be taken to reduce any potential harm.
There was some support during the workshop for the view that online intermediaries should take on more responsibility for activities and content that may be harmful to young children. The report’s authors recommend more consistency in terms of compliance and regulation between regulated broadcasters and non-mainstream digital media/social media. This could enhance protection for children in ‘YouTube families’ and other instances where there are no or limited checks on what is being put into the public domain. ‘Controller hosts’ (such as Facebook, YouTube and Twitter) and ‘independent intermediaries’ (such as Google) should have a duty of care to consider young children’s privacy and best interests in their operations. Further research should be undertaken into the potential of image-matching, tracking and content moderation technologies for controlling the extent to which information and images relating to a young child can be copied, re-contextualised and re-shown in a different context from the original post or publication.
The introduction of a Children’s Digital Ombudsman could provide a way for children’s interests to be better represented in relation to all forms of digital publications. We shouldn’t put all our eggs in the basket of the so-called ‘right to be forgotten’. Before Postman’s warnings become irreversible reality, let’s consider how we want our young children to be treated in the offline world and strive to hold the digital world to the same standards.
 Defined by David Erdos, ‘Delimiting the Ambit of Responsibility of Intermediary Publishers for Third Party Rights in European Data Protection: Towards a Synthetic Interpretation of the EU acquis’ (27 June 2017), available at SSRN: https://ssrn.com/abstract=2993154
On the 17 November 2017, the Information Law & Policy Centre will be holding their 3rd Annual Workshop and Lecture entitled, Children and Digital Rights: Regulating Safeguards and Freedoms. See the Call for Papers.
Readers of the Information and Law Policy Centre blog may be interested in the following event held by Maastricht University.
The academic conference addresses the question of how surveillance is perceived from the perspective of three main stakeholders involved in the process of surveillance: surveillance authorities, data subjects and companies. The conference tackles precisely this issue. It brings together the perspectives of those stakeholders and provides informative insights from academics in both the EU and the US on how these issues interplay in different contexts.

Programme
9:30-10:00 Registration
10:00-10:30 Keynote speech: “The EU’s approach towards surveillance”, Philippe Renaudière, Data Protection Officer, European Commission
10:30-12:00 Panel I: The perspective of the authorities who exercise surveillance
12:00-13:30 Lunch
13:30-15:00 Panel II: The perspective of individuals subject to surveillance
15:00-15:30 Coffee break
15:30-17:00 Panel III: Means of surveillance
17:00-17:30 Closing remarks, Giovanni Buttarelli, EDPS
17:30-18:00 Wrap-up
18:00 Network cocktail
Panel I: The perspective of the authorities who exercise surveillance
Surveillance authorities currently face several challenges, ranging from tackling the consequences of the recent Paris and Brussels terrorist attacks and the alleged lack of data sharing for security purposes (prevention, detection and investigation of serious crimes, including terrorism), to reconciling the migrant crisis and the challenges it brings with border protection concerns. On a broader scale, negotiations surrounding the Privacy Shield and the finalisation of the General Data Protection Regulation are on the agenda. Moreover, the enforcement of surveillance-related decisions or judgments leads to increasing constitutionalization of this field. Within the first perspective, the conference will address the following issues:
- Does enhanced surveillance always lead to enhanced security?
- Which other means, apart from surveillance, are available to foster security? In this context, does surveillance necessarily imply bulk data retention? Or could alternative approaches be pursued?
- Intelligence cooperation and data exchange between authorities within and outside the EU need a common understanding at the EU level on: what is intelligence, who can have access to collected information, and for what purposes?
- What impact does the lack of EU competence on matters of national security have on intelligence sharing within or outside the EU?
- The growing demand of reconciliation of security and surveillance with privacy: do new surveillance measures guarantee the respect of both?
- The future of the Privacy Shield: enforcement and relevance for US businesses
- The risks and benefits of surveillance by private controllers (EU and US based)
- Blurring boundaries between surveillance and profiling techniques and the use of profiling in surveillance
- The constitutionalization effect on the field of privacy through surveillance enforcement
Chair and discussant: Francesca Galli, Maastricht University
- Xavier Tracol, Senior Legal Officer at Eurojust: “From prohibiting generalised mass surveillance to permitting targeted retention of both traffic and location data for the purpose of fighting serious crime”
- Christiane Hoehn, Principal Advisor to the EU Counter-Terrorism Coordinator: “The information sharing environment in counter-terrorism: Challenges and perspectives”
- Elif Erdemoglu, Lecturer at The Hague University of Applied Sciences / Researcher at Cybersecurity Center of Expertise, The Hague University of Applied Sciences: “The risks and benefits of surveillance by private controllers (EU and US based)”
- Elspeth Guild, Jean Monnet Professor ad personam at Queen Mary, University of London as well as at the Radboud University Nijmegen, Netherlands
Surveillance often encompasses the general public and is not targeted at particular individuals suspected of being involved in serious criminal activities, including the preparation of a terrorist attack. Within this perspective, it will be discussed how surveillance should be better regulated in order to achieve its goal most efficiently, whether the expansion of surveillance means is always beneficial to security, and what data subjects’ rights are with regard to surveillance. More precisely, the following topics will be addressed:
- The necessity of effective enforcement of data subjects’ rights with regard to surveillance
- How broad should surveillance be – applying only to suspects or to the general public?
- Do surveillance policies affect certain communities disproportionally and how could this be addressed?
- What are the necessary limitations of surveillance and the relevant criteria in this regard?
- The notions of ‘public security’ and ‘national security’: undefined in the EU Treaties and problematic in national legal contexts
- The issue of consent in surveillance
Chair and discussant: Maja Brkan, Maastricht University
- Gloria Gonzalez Fuster, Research Professor at LSTS at Vrije Universiteit Brussel: “Who is the data subject: the surveillance perspective”
- Lorna Woods, Chair of Internet Law, School of Law at the University of Essex: “The Investigatory Powers Act: bulk powers under control?”
- Ike Kamphof, Assistant Professor at the Department of Philosophy at Maastricht University: “Securing Privacy. How Homecare Surveillance Shows the Need for a Civil Art next to Rules.”
The third issue addressed will be the perspective of the means of surveillance and the interplay between, on the one hand, legal limitations and possibilities in this regard and, on the other hand, the constant technical development of innovative means of surveillance. Encryption, Privacy by Design and by Default, anonymization, dealing with big and raw data have become a part of constant legal and political debate in Europe and the world. The recent Apple dispute in the US epitomizes the importance of this debate. Therefore, this perspective will address the following issues:
- Legal regulation of technical means of surveillance: do the rigidity of the legal regime and the incapacity to quickly adapt to technical changes prevent more effective surveillance?
- The (un)necessary legal limitations of technical means of surveillance
- The policies of private controllers and the newest technical developments: in the absence of comprehensive legal regime, are private controllers leading the way in regulation?
Chair and discussant: Sergio Carrera, CEPS and Vigjilenca Abazi, Maastricht University
- Rocco Bellanova, Post-doctoral researcher at University of Amsterdam (UVA): “Testing (Surveillance) Devices? Data protection instruments beyond compliance”
- Annika Richterich, Assistant Professor in Digital Culture Literature and Art, Faculty of Arts and Social Sciences at Maastricht University: “Hacking Surveillance: Civic Tech Monitoring as (Data) Activism”
- Anna Dimitrova, Associate Professor in International affairs, Department of International Affairs at ESSCA School of Management, Paris: “Balancing National Security and Data Protection: The Role of EU and US Policy-makers and Courts before and after the NSA Affair”
- Federico Fabbrini, Full Professor of Law at the School of Law & Government of Dublin City University
The event is subsidised by the University Fund Limburg SWOL, the academic association for contemporary European Studies UACES and the Centre for European Policy Studies CEPS.
How has the law adapted to the emergence and proliferation of social media tools and digital technology? Furthermore, how successful has the law been in governing the challenges associated with an ongoing reformulation of our understandings of public and private spaces in the online environment?
These were the key questions discussed by a panel of experts at the Information Law and Policy Centre earlier this month. The event heralded the launch of a new book entitled ‘The Legal Challenges of Social Media’, edited by Dr David Mangan (City Law School) and Dr Lorna Gillies (University of Strathclyde). A number of the book’s authors provided insights into the contents of their individual chapters.
Social Media and Press Regulation
Professor Ian Walden began proceedings with a discussion of his chapter on press regulation. His chapter was informed by his own experience on the board of the Press Complaints Commission (PCC) between 2009 and 2014.
Walden started by addressing the question of what constitutes “press law”. Walden highlighted that for the most part journalists and editors are subject to the same law as everyone else – there is no special ‘public interest’ defence or journalistic exemption for hacking into the voicemail of a mobile phone user, for example. At the same time, journalists abide (to varying degrees) by an Editors’ Code which goes beyond the provisions of the law. In this context, the online environment and social media have rendered press regulation even more complex in a number of ways.
First, a converging media landscape has led to newspapers and magazines such as the Guardian, Financial Times, Elle and OK using video content which the online video regulator, the Association for Television on Demand (ATVOD), ruled was subject to the EU Audiovisual Media Services (AVMS) Directive in 2011. Second, Walden highlighted that social media has provoked questions about how journalists use ‘private’ information posted to sites like Facebook to source and report news stories. Third, the transnational nature of social media has significantly complicated issues around what constitutes ‘publication’ including in the provisions of (super)-injunctions and in cases concerning the ‘right to be forgotten’.
Ultimately, Walden questioned whether the current regulatory regime was sufficient in light of these challenges, but indicated that the prospect of the press submitting to a state regulator remained a distant prospect.
Reforming the Contempt of Court Act 1981
Another central plank of UK law for any working journalist is the Contempt of Court Act 1981. Where previously, however, reporting restrictions contained within the act were almost uniquely relevant to media organisations, social media has now made potential ‘accidental journalists’ of us all. In his chapter, Dr Daithí Mac Síthigh (Newcastle University) highlighted how the judiciary has wrestled with the interpretation of the Contempt of Court Act in light of the fact that jurors now have access both to all manner of information and the means of publication.
Mac Síthigh argued that the thinking of the former Attorney General, Dominic Grieve, on the issue has alternated. On the one hand, Grieve has indicated that a few individual Twitter users are not likely to have much influence; it is the mainstream media who must comply with the Act of 1981. On the other hand, Grieve has noted that while the mainstream media do comply – for the most part – with the 1981 Act, various other individuals and jurors publishing online are not respecting the law.
In the last few years, the Law Commission has undertaken significant work on Contempt of Court considering – among others – the question of whether a publisher is liable for prejudicial material posted online, where the material has been published entirely legitimately prior to legal proceedings becoming active. In 2015, a new criminal offence for jurors conducting prohibited research – a Law Commission recommendation – was implemented in section 71 of the Criminal Justice and Courts Act 2015.
Protecting Freedom of Expression Online: Article 10 and Article 8?
Moving away from the law as it relates directly to the media, Professor Lorna Woods (University of Essex) noted that current debate around internet and social media use often focuses on state interference with the right to freedom of expression in Article 10 of the European Convention on Human Rights (ECHR). This is evidently an important issue, but she contended that perhaps we should also be looking at the state’s obligations to actively and positively protect freedom of expression. She suggested that although Article 10 does envisage some positive obligations on the part of the state, it is perhaps not always the best instrument for this purpose. Woods proposed that Article 8 ECHR’s notion of ‘respect for private life’ seems to suggest a “greater level of positive obligation” on behalf of the state than can be found in Article 10 ECHR. Article 8 ECHR is not just about privacy, she argued; it includes the spaces where we interact with society. In this regard, she suggested that rather than “obsessing” over Article 10 ECHR, recourse to Article 8 ECHR in cases concerning social networking sites ought to be a consideration in protecting our human rights. After all, social networking sites and internet-connected technologies are increasingly making more public those spaces which we previously deemed to be entirely private.
A Right to Post-mortem Privacy?
For Edina Harbinja (University of Hertfordshire), the question of privacy does not end when we die. Her remarks, based on her book chapter, considered whether a right of privacy beyond death is necessary in light of our everyday social networking habits. Her study of Facebook’s policies for deceased users identifies what she regards as several ‘contradictory’ options for Facebook profiles after death, including: memorialisation of the profile, which can be requested by family/next of kin; the submission of a request for deactivation or removal of a deceased user’s profile; and the relatively recent option (2015) of providing Facebook with a legacy contact who can administer the profile.
Harbinja highlighted that the EU’s General Data Protection Regulation – which the UK is intending to implement regardless of Brexit – does not comprehensively envisage protection of the data of the deceased, but does leave open the possibility of member states making their own provisions. She also suggested that the collection of personal data on platforms such as Facebook demanded legal reforms be considered in the areas of copyright and in legislation containing traditional understandings of property.
Social Media and The Defamation Act 2013
In closing the panel, Lorna Gillies (University of Strathclyde) returned to a theme addressed earlier by Ian Walden, namely the complications arising from the transnational nature of social media. Gillies’ remarks concerned the Defamation Act 2013 and the UK’s impending exit from the European Union. Under section 1 of the 2013 Act, a claimant must demonstrate ‘serious harm’ to their reputation for it to be actionable in the courts. Moreover, serious harm must be demonstrated in the English court system, raising questions as to how English law will interact with European law in the future – particularly in cases where the defendant in any defamation claim is based in another jurisdiction.
Playing ‘catch up’? Reforming the Law in a Social Media Age
Taken together the panellists’ presentations highlighted several key themes. Since its emergence and normalisation, social media has challenged lawmakers and the legal profession by complicating definitional understandings of what we regard as ‘published’ ‘content’ and who we regard as ‘publishing’ ‘media producers’. Social media has also blurred the boundaries of public and private spaces, while creating a globally connected world which traverses both national and supra-national jurisdictions. These trends have rendered some existing legal provisions inadequate, complicated others, and led to completely new legislation. The overriding impression given by the panel was the sense that for the foreseeable future the law will continue to play ‘catch-up’.
Daniel Bennett, Research Assistant, Information Law and Policy Centre
In this guest post, Professor William Webster outlines the objectives of the civil engagement strand of the National Surveillance Camera Strategy. He is the Director at the Centre for Research into Information, Surveillance and Privacy (CRISP), Professor of Public Policy and Management at the University of Stirling, and is leading the civil engagement strand. This post first appeared on the blog of the Surveillance Camera Commissioner, Tony Porter.
It’s often said that the UK is one of the most surveilled countries in the world, with some reports estimating over 6 million CCTV cameras in the UK – surveillance cameras are everywhere and have become a familiar sight on our streets and in shopping centres, schools, hospitals and airports. For me, one of the pressing issues is whether members of the public, when they see a surveillance camera, know or understand why it is there, who is operating it and what it does. In some cases I suspect these questions cannot be answered. The objective of the civil engagement strand is to make information freely available to the public about the operation of surveillance camera systems.
We know surveillance happens, and given that the current threat level is severe, I think most people expect CCTV surveillance to take place. While surveillance is in use, organisations must put the individual at the core of what they do to ensure that they are kept safe – but this must happen without infringing their basic human rights contained within Article 8 of the European Convention on Human Rights or compromising their rights under national and European Data Protection rules.
Engaging citizens
In this work strand we want to engage citizens and civil society in discussion about the use of surveillance camera systems and associated technologies (such as automatic facial recognition). We want to raise public awareness and encourage debate about the use of such systems and about how surveillance is conducted on our behalf. The underlying premise is that better governance of surveillance cameras can only be realised through enhanced public debate about their role in society.
Technology is advancing quickly and we live in a world where body worn video, dash and head cams are increasingly commonplace. Drones are taking off and automatic facial recognition is no longer the stuff of science fiction. As technology advances, so does the potential to intrude into the lives of citizens – and as surveillance cameras become computerised and automated, it becomes even harder to know what each camera is doing. So public trust and support for surveillance need to be balanced with our needs and expectations for personal protection and privacy, and it is important that the levels and types of surveillance realised through CCTV are delivered in the public interest.
Civil engagement
Our civil engagement plan has now been published and it aims to ensure that:
- Citizens have free access to information relating to the operation of surveillance cameras,
- Citizens have a better understanding of their rights in relation to the operation of surveillance cameras,
- Citizens have an understanding of how surveillance cameras function and are used, and
- Organisations have an understanding of the information relating to the operation of surveillance cameras that they should make available to citizens.
I’d be interested to hear what people think about the plan so please let me know by commenting on the blog of the Surveillance Camera Commissioner.
Over the next three years we will be working to make sure that civil engagement happens across the strategy, and also encouraging organisations to talk to the people their surveillance cameras monitor, to publish information about the systems they use, to explain why they use them, and to set out what happens to the personal data they collect.
Look out for some of the events we will be holding as part of the strategy and make sure you sign up for email alerts for Tony’s blog and also follow him on Twitter.
Professor William Webster
Readers of the Information Law and Policy Centre blog may be interested in the following ECREA event.
The Future of Media Content:
Interventions and Industries in the Internet Era
15 – 16 September 2017
The “Communication Law and Policy” and “Media Industries and Cultural Production” Sections of the European Communications Research and Education Association (ECREA) invite you to their 2017 joint workshop on The Future of Content: Interventions and Industries in the Internet Era, hosted by the University of East Anglia’s School of Politics, Philosophy, Language and Communication Studies. This unique opportunity will bring together those investigating the processes of production and distribution with those studying the policy and regulation governing those processes.
Renowned Professor Eli Noam of Columbia University, New York, will deliver the keynote address. A keynote panel of industry and policy actors will additionally set the tone for a day and a half of research-based discussions on trends and challenges.
Media and communications industries have changed dramatically over the past decade and both businesses and policy makers are struggling to adapt. Legacy media companies engaged in cultural and news production are trying to change their business models in a manner that will allow them to survive in the face of increased competition for advertising income and the constraints of having a new breed of intermediaries between them and their audiences.
Policy makers are looking beyond the traditional investment in public service broadcasting and content quotas for new interventions and policy mechanisms that might encourage content production and distribution. One of the biggest challenges is defining the landscape of actors, markets and relationships in which content is created and disseminated – from the YouTube star making and reaching millions from a bedroom to the public service broadcaster (PSB) that is now managing big data for its online audience and negotiating with service providers for zero-rating carriage in order to reach its audiences with sufficient speed and stability.
This joint workshop will include panels and cutting edge paper presentations from a broad range of disciplines, interested in the policy, production and business of content and its carriage.
Location: Julian Study Centre, UEA
Friday 15 September
Registration from 10:00
11:00 – 12:30 YECREA session for professional development of young scholars
12:30 – 13:30 Lunch (self-organised)
13:30 – 14:30 Keynote sponsored by UEA’s Centre for Competition Policy (CCP) Professor Eli Noam, Columbia University, NY
14:30 – 16:00 Keynote industry and policy stakeholder panel
16:00 – 16:15 Break
16:15 – 17:45 Panel: How media institutions are adapting to the increasingly non-linear, mobile environment
19:00 Conference Dinner at The Library, Norwich
Saturday 16 September
09:30 – 11:00 Panel: The processes and discourses of policy interventions in media
11:00 – 11:15 Break
11:15 – 12:15 Panel: The changing systems for funding quality content
12:15 – 13:15 Panel: The potential of fans in content production industries
13:15 – 14:15 Lunch (provided)
14:15 – 15:15 Panel: Algorithms and Platforms in media markets: new roles between content and consumers
15:15 – 16:15 Panel: Redefining journalism and the public in the new news media environment
16:15 – 17:00 Break
17:00 – 18:30 Panel: From regulating to “chilling”: the application of law to communications and cultural expression
18:30 Closing remarks
Cost: £50 for waged participants and £40 for non-waged participants and those from ECREA recognized “soft-currency” countries. Includes facilities, organisation, all coffee breaks and lunch on Saturday
Optional Conference Dinner: cost to be given upon registration
Please contact email@example.com with any queries
*This programme is designed with the expectation that all accepted papers will be presented, so it remains preliminary and subject to change until registration is complete. Panel length varies depending on the number of presenters included, and is intended to allow ample time for discussion.
Legal researchers might be interested in the following fellowship opportunity at UK Parliament…
The UK Parliament is currently piloting an academic fellowship scheme that offers academic researchers, from different subject areas and at every stage of their career, the opportunity to work on specific projects from inside Westminster’s walls.
We are now in the second phase of this scheme. This involves an ‘Open call’ which offers academics the opportunity to come and work in Parliament on a project of their own choosing, as long as they can demonstrate that it is relevant, and will contribute, to the work of Parliament.
One area of interest to Parliament is the impact of Parliament on legislation. We are interested in working with academics with knowledge and/or experience in identifying, tracking and assessing impact to help us to understand better, and identify empirically, the influence of MPs and Peers’ scrutiny on legislation.
As a bill passes through Parliament, MPs and Peers examine the proposals contained within it at both a general level (debating the general principles and themes of the bill) and a detailed level (examining the specific proposals put forward in the bill, line by line). More information about the different stages in the passing of a bill is provided on the parliamentary website. In so doing, MPs and Peers debate the key principles and main purpose/s of a bill and flag up any concerns or specific areas where they think amendments (changes) are needed.
We are interested in developing a series of case studies that examine how Peers’ scrutiny of legislation has shaped the focus, content or tone of legislation as it becomes an Act (given Royal Assent). This can include:
- Direct influence, for example an amendment tabled by a Peer is successful and is agreed to by the government and incorporated directly into the bill.
- Indirect influence, for instance when an amendment tabled by a Peer is not successful but the substance of it is subsequently introduced by the government itself (and when the role of the Peer that tabled it in the first instance is not acknowledged).
We envisage that the case studies will look at a government bill scrutinized by the House of Lords and trace the outcome/s of amendments tabled and debated at each stage of the bill’s scrutiny.
The choice of bills to focus on will be decided in conjunction with the academic. This will require the Fellow to:
- understand the intentions of the amendments tabled
- understand how the amendment related to, and interacted with the bill as drafted
- produce an explanation of the outcome in each case
- draft a concise written account of the House’s impact on the bill.
The Scheme is open to academics (researchers with PhDs) employed at any of the 33 universities holding Impact Acceleration Award funding from either the Economic and Social Research Council (ESRC) or the Engineering and Physical Sciences Research Council (EPSRC). There are opportunities for flexible working including both part-time and remote working.
The deadline for submitting an expression of interest to the Scheme is midnight on 4th September 2017.
For more information about the Academic Fellowship Scheme see: http://www.parliament.uk/mps-lords-and-offices/offices/bicameral/post/fellowships/parliamentary-academic-fellowship-scheme/
Submissions to the Law Commission’s consultation on ‘Official Data Protection’: Guardian News and Media
The Law Commission has invited interested parties to write submissions commenting on the proposals outlined in a consultation report on ‘official data protection’. The consultation period closed for submissions on 3 May, although some organisations have been given an extended deadline. (For more detailed background on the Law Commission’s work please see the first post in this series).
The Information Law and Policy Centre is re-publishing some of the submissions written by stakeholders and interested parties in response to the Law Commission’s consultation report (pdf) to our blog. In due course, we will collate the submissions on a single resource page. If you have written a submission for the consultation you would like (re)-published please contact us.
Please note that none of the published submissions reflect the views of the Information Law and Policy Centre which aims to promote and facilitate cross-disciplinary law and policy research, in collaboration with a variety of national and international institutions.
The fourteenth submission in our series is the response submitted by Guardian News and Media. The executive summary outlines that Guardian News and Media is “very concerned that the effect of the measures set out in the consultation paper (‘CP’) would be to make it easier for the government to severely limit the reporting of public interest stories”.
(Previous submissions published in this series: Open Rights Group, CFOI and Article 19, The Courage Foundation, Liberty, Public Concern at Work, The Institute of Employment Rights, Transparency International UK, National Union of Journalists, and English Pen, Reporters Without Borders and Index on Censorship, the Open Government Network, Lorna Woods, Lawrence McNamara and Judith Townend, Global Witness, and the British Computer Society.)