Information Law & Policy Centre Blog

Information law and policy research at the Institute of Advanced Legal Studies

ILPC Annual Conference and Lecture 2018 Transforming Cities with AI: Law, Policy, and Ethics

Thu, 13/12/2018 - 12:59

The ILPC’s Annual Conference and Lecture for 2018, Transforming Cities with AI: Law, Policy, and Ethics took place on Friday, 23 November, 9.30am–5.30pm, at the Institute of Advanced Legal Studies, 17 Russell Square, London WC1B 5DR.

For the full conference programme, please see here.

 

ILPC ANNUAL LECTURE 2018: DELIVERED BY BARONESS ONORA O’NEILL

Baroness Onora O’Neill, Emeritus Professor of Philosophy (University of Cambridge) and Crossbench Member of the House of Lords, delivered this year’s ILPC Annual Lecture, entitled ‘Ethics for Communication’. Baroness O’Neill elucidated a new approach to thinking about the role ethics can and should play in communications and, topically, in information communication technologies (ICT). She commented on the history of attempts to control speech acts through censorship of various kinds.

This history spans from Plato’s disdain for written records as a step removed from the truth they sought to represent (ironically, the teachings of Socrates we have today survive only because Plato recorded them), to John Stuart Mill’s distinction between self-expression and other forms of speech acts.

 

 

 

Baroness O’Neill went on to critique the role ethics currently plays in discourse on data and artificial intelligence, arguing that the term ‘data ethics’ is a misnomer: there is nothing ethical about data itself, although data can be used, handled and developed in ways that are ethical.

Baroness O’Neill observed that long-recognised norms guide the ethics of speech acts, or speech aimed at communicating, going beyond the human rights paradigm of “freedom of expression” and “access to information”. Such norms include clarity, truthfulness, relevance, civility and decency, amongst many others.

 

As such, Baroness O’Neill called for an ethics for communication, rather than an ethics of communication. An ethics for communication moves beyond addressing the relationship between ethics and communication, or the extent to which communication is ethical, and instead names a decisive purpose for which communication must be directed.

ILPC ANNUAL CONFERENCE 2018: TRANSFORMING CITIES WITH AI: LAW, POLICY, AND ETHICS

Baroness O’Neill’s lecture launched the ILPC Annual Conference 2018, which featured keynote panels and academic sessions with policymakers, practitioners, industry, civil society and academic experts from the fields of law, computer engineering, history, economics, sociology and philosophy.

Throughout the day, speakers and audience members engaged in lively debates and discussions on the laws and policies that govern and regulate the AI-driven systems transforming our daily interactions, communications, and relationships with the public and private sectors, with technology and with one another. These debates were multidisciplinary and cross-sector, with insights brought from around the world by those attending, including from the UK, Ireland, France, Belgium, the Netherlands, Italy, Spain, Turkey, Canada, the U.S. and Kenya.

KEYNOTE PANEL

The conference keynote panel included leading figures from government, industry, academia, and civil society: Tony Porter (Surveillance Camera Commissioner), Helena U. Vrabec (Legal and Privacy Officer, Palantir Technologies), Peter Wells (Head of Policy, Open Data Institute) and Baroness O’Neill. The panel was chaired by Dr Nóra Ní Loideáin (ILPC), with Silkie Carlo (Chief Executive, Big Brother Watch) as discussant.

The panel addressed an impressive range of topics and issues. Tony Porter noted the complex legislative oversight patchwork (‘a murder of regulators’) governing AI-driven surveillance, such as CCTV enabled with facial recognition and automated number plate recognition technologies. On a more encouraging note, Helena Vrabec highlighted the positive effect that the GDPR has had on corporate culture, particularly in generating high-level conversations on the privacy and ethical implications posed by the use of predictive analytics.

 

 

Peter Wells spoke of the societal value to be gained by viewing data as public infrastructure and the role that ‘data trusts’ could play in this space. Silkie Carlo stressed the importance of proper oversight and clear legislative frameworks for emerging technologies, and described the regular public engagement work and Freedom of Information research undertaken by Big Brother Watch to promote a wider understanding of the use of AI-driven systems.

PANEL 1: AI AND TRANSPORT

The first academic panel of the conference focussed on the legal and ethical implications of smart cars. Chaired by Dr Rachel Adams (ILPC), the panel included Maria Christina Gaeta (University of Naples), who spoke on the use of personal data in smart cars, arguing for stricter legal enforcement beyond the GDPR in order to regulate such processing more effectively.

Speaking on the ethical dimensions of smart cars, the second panellist, Professor Roger Kemp (University of Lancaster), drew on his wealth of policy-making experience in transport-related matters to discuss a range of issues, from the ineffectiveness of safety pilot testing to the behavioural psychology surrounding such technologies.

The discussant for this panel was Dr Catherine Easton (University of Lancaster), who discussed her work on the rights of persons with disabilities, the need for smart cars to be developed to be fully autonomous, and the shift towards conceptualising smart cars as a service and not just a product.

PANEL 2: AI, DECISION-MAKING, AND TRUST

The second (parallel) academic panel was chaired by Peter Coe (ILPC Research Associate), with Professor Hamed Haddadi (Imperial College London) as discussant, and examined the different governance mechanisms and policy narratives around public trust and oversight that have framed the development of AI decision-making systems to date.

Gianclaudio Malgieri (Vrije Universiteit Brussel) spoke on ‘The Great Development of Machine Learning, Behavioural Algorithms, Micro-Targeting and Predictive Analytics’, observing that issues of trust in this area go beyond the mere protection of private life in private spaces to the protection of cognitive freedom. He also highlighted the role that data protection impact assessments could play in improving governance in this space. Dr Jedrzej Niklas’s (LSE) presentation concerned improving the accountability of automated decision-making within public services. He put forward an analytical framework that identifies how and where current accountability mechanisms warrant updating. This framework involves recognising the following ‘critical points’: a) layers within the system (software, input data, policies); b) life stages of systems (legislative process, design of technological tools, actual use); c) actors involved in those stages (public administration, civil society); and d) the balance of power and relationships between those actors.

Matthew Jewell (University of Edinburgh) spoke on the importance of the policy narratives that underpin emerging technologies within smart cities and explored the accountability benefits to be gained from acknowledging the existence of ‘distrust’ within these new systems. Dr Yseult Marique (University of Essex) and Dr Steven Van Garsse (University of Hasselt) presented a joint paper on the increasing use of public-private partnerships within smart cities and highlighted the challenges and governance gaps within procurement contracts. In particular, drawing on case studies from the UK and Belgium, they noted the use of private sector focussed contracts for the procurement of public services, as opposed to public sector contracts.

PANEL 3: AUTOMATED DUE PROCESS? CRIMINAL JUSTICE AND AI

The third panel of academics and practitioners was chaired by Sophia Adams Bhatti (Law Society of England and Wales), with Alexander Babuta (Royal United Services Institute) as discussant, and addressed the use and governance of AI-driven systems within the criminal justice sector.

Chief Superintendent David Powell (Hampshire Constabulary) and Christine Rinik (University of Winchester) presented a joint paper on ‘Policing, Algorithms and Discretion’ drawn from interviews with prospective front-line professional users. Dr John McDaniel (University of Wolverhampton) spoke on the critical need for effective evaluation of the potential impact of AI-driven systems on police decision-making processes.

Marion Oswald (University of Winchester) presented an insightful paper on how key legal principles from administrative law could guide our ‘Algorithm-Assisted Future’ within the criminal justice sphere. Dr Nóra Ní Loideáin (ILPC) addressed how AI could be used to improve the oversight and safeguards of predictive policing systems, as provided for under the EU Criminal Justice and Police Data Protection Directive and the UK Data Protection Act 2018.

PANEL 4: AI AND AUTONOMY IN THE CITY

The last panel of the conference brought together an interdisciplinary range of speakers to discuss the use of AI technologies both in cities and in legal administration. Chaired by Dr Rachel Adams (ILPC), this panel included a presentation by Dr Edina Harbinja (Aston University) on the use of AI in intestacy and the execution of wills, and a presentation by Professor Andrew McStay (Bangor University) on smart advertising in cities and the use of AI technologies in emotion detection.

In addition, Robert Bryan and Emily Barwell (BPE Solicitors LLP) delivered an interactive presentation on the regulatory regime governing AI technologies. They spoke specifically on the role of transparency and unpacked in detail what this meant in context. The last presentation on this panel was delivered by Dr Joaquin Sarrion-Esteve (University of Madrid), who spoke on his work on the human rights impact of AI and the development of rights standards for AI-based city governance.

The discussant for this panel was Damian Clifford (Leuven), who discussed the role of the GDPR, and specifically its provisions relating to transparency and the rights of the data subject.

PLENARY PANEL AND CLOSING REMARKS

Professor Hamed Haddadi (Imperial College London), Dr Laura James (University of Cambridge) and Marion Oswald (University of Winchester) concluded the conference proceedings with some reflections and insights. In particular, they noted the importance of realising both the benefits and the limits of empowering and educating the public, alongside the essential shift in corporate culture that must take place in order for the design and development of data-driven systems to be intelligible to the public, secure, accountable and trustworthy.

Also highlighted were the need to focus more on the enforcement of existing legal frameworks and governance, as opposed to the hasty development of new laws, and the welcome impact that the GDPR has had in making privacy a reputational selling point for companies. This panel was chaired by Dr Nóra Ní Loideáin (ILPC).

On a final note, the ILPC is grateful to all of its speakers and audience members who contributed to a dynamic day of rich policy and academic discussions and looks forward to welcoming everyone back for its forthcoming events in 2019.

Great conference #ILPC2018, Many thanks for the invitation to participate in the @infolawcentre 2018 Annual Conference @NoraNiLoideain #AI #FundamentalRights #London

— Joaquín Sarrión (@joaqsarrion) November 23, 2018

 

Huge thanks to @NoraNiLoideain Rachel Adams and all at @infolawcentre for a fantastic event… so much to take away #ILPC2018

— Catherine Easton (@EastonCatherine) November 23, 2018

 

The conference has sadly come to an end! I’ve learned a lot and seen some brilliant talks, colleagues and friends! Huge thanks to @NoraNiLoideain and her colleagues @infolawcentre for organising such a successful event once again! #ILPC2018

— Edina Harbinja (@EdinaRl) November 23, 2018

 

With thanks to Bloomsbury Publishing and the John Coffin Memorial Trust Fund for their sponsorship of these events.


House of Lords AI Report: Policy Impact, Implementation, and Progress

Mon, 26/11/2018 - 14:24

Date
06 Dec 2018, 17:30 to 06 Dec 2018, 19:00

Institute
Institute of Advanced Legal Studies

Type
Seminar

Venue
Institute of Advanced Legal Studies, 17 Russell Square, London WC1B 5DR

Description

Expert Panel Discussion

 

The ILPC is delighted to be hosting an expert panel discussion, bringing together leading academics and front-line professionals to consider the policy impact, implementation and overall progress to date of the House of Lords report ‘AI in the UK: ready, willing and able?’, published in April 2018.

As highlighted in the report: ‘The UK must seek to actively shape AI’s development and utilisation, or risk passively acquiescing to its many likely consequences. There is already a welcome and lively debate between the Government, industry and the research community about how best to achieve this. But for the time being, there is still a lack of clarity as to how AI can best be used to benefit individuals and society.’

This comprehensive and wide-ranging report put forward a number of policy recommendations key to realising the individual and societal benefits made available by AI’s development, and to addressing the harms AI could pose to the individual’s autonomy, privacy, liberty and due process rights if these developments are not implemented in a way that is legal, accountable and ethical.

Speakers: 

Dr Stephen Cave, Philosopher, Diplomat and Writer; Executive Director of the Leverhulme Centre for the Future of Intelligence and Senior Research Associate, University of Cambridge.

Professor Jane Winters, Professor of Digital Humanities at the Institute of Historical Research, University of London and Fellow of the Royal Historical Society. Her research interests include communications, culture, digital resources, and digitisation.

Sheena Urwin, Head of Criminal Justice in Durham Constabulary. She has worked in policing for over 30 years and has recently conducted research into the use of a ‘Harm Assessment Risk Tool’.

This AI tool uses machine learning to assess the risk of reoffending. Its use has attracted attention from many quarters, owing to the openness with which Sheena and Durham Constabulary have discussed the tool and their active engagement in the resulting debate.

Chair:

Dr Nóra Ní Loideáin, Director and Lecturer in Law, Information Law and Policy Centre, Institute of Advanced Legal Studies, University of London.

Registration for the event is available here.


Book Launch and Expert Panel Discussion: Law, Policy and the Internet

Thu, 01/11/2018 - 12:07
Date
13 Dec 2018, 17:30 to 13 Dec 2018, 19:00

Institute
Institute of Advanced Legal Studies

Type
Seminar

Venue
Institute of Advanced Legal Studies, 17 Russell Square, London WC1B 5DR

Description
Book Launch and Expert Panel Discussion: Law, Policy and the Internet

 

This comprehensive textbook by the editor of Law and the Internet seeks to provide students, practitioners and businesses with an up-to-date and accessible account of the key issues in internet law and policy from a European and UK perspective. The internet has advanced in the last 20 years from an esoteric interest to a vital and unavoidable part of modern work, rest and play. As such, an account of how the internet and its users are regulated is vital for everyone concerned with the modern information society. This book also addresses the fact that internet regulation is not just a matter of law but increasingly intermixed with technology, economics and politics. Policy developments are closely analysed as an intrinsic part of modern governance.

Law, Policy and the Internet focuses on two key areas: e-commerce, including the role and responsibilities of online intermediaries such as Google, Facebook and Uber; and privacy, data protection and online crime. In particular there is detailed up-to-date coverage of the crucially important General Data Protection Regulation which came into force in May 2018.

Panel:

  • Lilian Edwards, Professor of Law, Innovation and Society, Newcastle Law School
  • Nora Ni Loideain, Director and Lecturer in Law, Institute of Advanced Legal Studies, Information Law and Policy Centre
  • Michael Veale, EPSRC PhD researcher in Responsible Machine Learning, University College London
  • Chris Marsden, Professor of Internet Law, University of Sussex

Chair:

  • Becky Hogge, Programme Officer, Open Society Foundations

About the panel:

Professor Lilian Edwards: Lilian Edwards is a Scottish UK-based academic and frequent speaker on issues of Internet law, intellectual property and artificial intelligence. She is on the Advisory Board of the Open Rights Group and the Foundation for Internet Privacy Research, and is Professor of Law, Innovation and Society at Newcastle Law School, Newcastle University.

Professor Chris Marsden: Chris Marsden is Professor of Internet Law at the University of Sussex and a renowned international expert on Internet and new media law, having researched and taught in the field since its foundation over twenty years ago. Chris researches regulation by code – whether legal, software or social code.

Dr Nóra Ní Loideáin: Nóra’s research interests focus on governance, human rights, and technology, particularly in the fields of digital privacy, data protection, and state surveillance. Her University of Cambridge PhD examined the mass surveillance of citizens’ communications metadata for national security and law enforcement purposes under European human rights law. This is the focus of her forthcoming monograph, Data Privacy, Serious Crime, and EU Policymaking (Oxford University Press).

Michael Veale: Michael Veale is an EPSRC PhD researcher in Responsible Machine Learning at University College London, where he looks at issues of fairness, transparency and technology in the public sector, and at the intersection of data protection law and machine learning. His work on ethical and lawful use of personal data has been drawn upon by governments, Parliament and by regulators.

Becky Hogge: Becky Hogge is currently working as a Programme Officer for the Open Society Foundations’ Information Program, engaging with issues of discrimination in automated decision-making, algorithmic transparency and narrow AI.

This event is free but advance booking is required. Registration is available here.


Knowledge isn’t power: knowledge helps you flourish

Fri, 26/10/2018 - 16:54

The following piece is taken from Dr Richard Danbury’s presentation at the ILPC’s seminar From Archive to Database: Reflections on the History of Laws Governing Access to Information, held on 25 October 2018.

I have been asked to revisit the idea and political value of access to information laws, on the grounds that there has been a dramatic change in the way that governments handle information over the past 20 years or so: a shift from paper-based storage systems to data-based information systems. The view is that this is not just a change in the means by which material is stored, but something more fundamental. Hence the need to rethink, fundamentally, the laws which deal with how to get hold of government information – freedom of information laws, or access to information laws (‘FOI’) – and hence hold the government to account.

What I’ll do is highlight one frequently overlooked rationale for FOI and access to information laws. This suggests that FOI should be seen as existing to promote the autonomy of individuals. The idea is to concentrate less on Bacon’s notion that knowledge is power, and more on the idea that information – and so knowledge – helps individuals flourish.

 

The democracy rationale & its problems

This amounts to a different way of conceiving FOI. Most frequently, FOI is justified by reference to democracy, particularly deliberative democracy. This sees FOI as a means by which information is leveraged out of the state, permitting citizens to make judgments on questions relevant to democracy. One example of this rationale being made explicit, selected almost but not quite at random, is the Council of Ministers Recommendation (2002). There are many others.

The democratic rationale for FOI is fine as far as it goes. But there are at least three problems with it. The first is that it is not true everywhere, the second that it is not always true, and the third is that it is not necessarily true. None of these problems are fundamental, in the sense of undermining the democratic argument for FOI. That remains a powerful and potent reason why we have FOI laws. However, they do illustrate that the democratic rationale is limited, and there is scope for exploring other explanations as to why FOI is valuable.

In more detail, the first criticism of the democracy argument for FOI is that the democratic rationale is limited in its geographic and political scope. It can be criticised where transparency is seen as an alien, and perhaps ideological, concept designed to extend Western power (as is being explored by the ILPC’s Dr Rachel Adams).

The second is that the democratic rationale for FOI is limited in time. One doesn’t have to go too far back in the history of European countries to find examples of how freedom of information was not seen as a self-evident good. For example, the King’s Intelligencer Sir Roger L’Estrange thought in the seventeenth century that public newspapers (which provided information to the public) gave the people ‘an itch… a licence to be meddling with government’. In the eighteenth century, an early news publisher, Robert Raikes, was sent to prison for publishing in the Gloucester Journal an account of a debate in Parliament about the state of the national debt. Clearly, the view that popular participation in government was a good thing – even a moral or necessary thing – was not widespread, so FOI laws would obviously have had a limited conceptual basis.

The third limitation is related to the first: the democratic argument for FOI weakens the further a political system is from a strong deliberative, participatory democracy. Where strong deliberative democracy is not a prime political value, or part of the constitutional fabric, FOI seems to have a weaker mandate. Moreover, even where it does have a strong mandate, this may be confined to election time, on the grounds that people are not regularly involved in governance, and the strong argument for freedom of information only applies when they are. This rather cynical argument follows Rousseau’s jibe (as it is commonly paraphrased) that the English are free for only one day every five years. Under this argument, the full gamut of information that might be released under FOI is constrained, and the strongest mandate is for the release of those pieces of information that are relevant to electoral decisions at election times. This may be quite limited.

So there is scope for developing and considering other rationales for FOI. The one proposed here is that FOI can be linked to, and justified with reference to, individual autonomy. It should be emphasised that there are likely to be other rationales for freedom of information law, too. As a scholar has said in relation to another area of information law, plurality is not necessarily miscellany.

 

An autonomy theory for information law

This account of autonomy as a justification for FOI (there are others) derives from a passage in Benkler’s The Wealth of Networks. Benkler suggests, at pages 149-151, that information law can do one of two things in respect of autonomy.

First, information law can regulate the ability of some (notably the government) to control the options and preferences of others. An example of this is the content regulation of the press, which can control the actions of others by controlling the information supplied to them. This could be considered negative control. There is another aspect to this too: positive control. Benkler’s example of this is the legislation under review in Planned Parenthood v Casey, which required women seeking abortions to listen to lectures designed to dissuade them from doing so.

The second way that information law can affect autonomy is by regulating the ability of some to reduce the range and variety of options open to people generally, or to a set of people. In doing so, such control reduces the number and impoverishes the variety of options open to people. An example of this, which Benkler doesn’t give, is regulating the content of education.

Take, as an extreme case, the Bantu Education Act 1953 in apartheid South Africa. This vastly restricted the material that could be taught to non-White students, and section 15(d) permitted the Minister to make regulations ‘prescribing courses of training or instruction in Government Bantu schools’. Patently, this reduced the information about the variety of options, and the quality of the options, available to non-White South Africans. Such a move was deliberate. Hendrik Verwoerd, then Minister of Native Affairs, said in 1954: ‘There is no place for [the Bantu] in the European Community above the level of certain forms of labour… Until now he has been subjected to a school system which drew him away from his own community and misled him by showing him the green pastures of European society in which he was not allowed to graze’.

 

Autonomy theory and FOI

How does Benkler’s idea relate to FOI? The answer, I hope, is clear by now: FOI can be a means by which the control of information flow, or the reduction of its range and variety, can be challenged. It can thereby be seen as an instrument whose use can help individuals make informed life choices, and thereby be more autonomous, and so more likely to flourish.

How? In respect of control, FOI can be seen as a means of challenging control of information flows that would otherwise control people’s lives (this is complementary to, but distinct from, a democratic rationale, which might lead to similar conclusions). For example, FOI laws may provide the basis by which an individual could gain otherwise controlled information about contraception or abortion. In doing so, FOI can help lay the groundwork for a person to make informed choices about how to live their life.

Another example might be drawn from the European Court of Human Rights (ECtHR) (Strasbourg) case of Guerra v Italy. FOI, seen as a means of contributing to an individual’s autonomy, may remove state control of the supply of environmental information about a refuse or recycling plant, proximity to which may be harmful to people’s lives. When such information is released, individuals can be more fully autonomous in the choices they make as to where to live. Choices made on the basis of such information are more likely to help them flourish.

In respect of the second notion – that of range and variety of information – FOI can also assist in promoting individuals’ autonomy. In this case it can do so by, for example, providing a means to remove control of access to maths textbooks in apartheid South Africa. It can also provide access to restricted political or religious texts. The point here is, of course, that in accessing these, an individual can become more informed about the range and type of options that are available to them in forming their lives, and thereby become truly autonomous.

Beyond such extreme examples, this Benklerian rationale clearly also helps provide access to a vast range of information held by the State that may affect an individual’s life. Indeed, it provides a rationale, distinct from the democratic rationale, for access to much of the information that is currently made accessible by current FOI laws.

A cautionary word is necessary, though: there are no doubt problems with Benkler’s autonomy rationale. One is the explicit assumption in Benkler’s work that ‘self-governance for an individual consists in critical reflection and re-creation by making choices over the course of his life’. This is problematic. For one thing, it is not entirely self-evident that this is the only way of conceiving self-governance. For another, surely autonomous people may choose to be autonomous by not critically reflecting and re-creating themselves? People do not necessarily cease to be autonomous, and become Aldous Huxley’s Gammas, because they reject this view of autonomy.

 

Applying the theory of FOI as autonomy

Nevertheless, tying Benkler’s notion to FOI remains useful to resolve the problems identified earlier, and some others. Why?

First, this autonomy rationale for FOI provides a reason for bringing in FOI laws, or respecting them, in environments where what may be a Western notion of transparency is rejected.

Second, the autonomy rationale for FOI provides a reason for extending access to information in political systems that do not place a high regard – in theory or in practice – on deliberative democracy. It answers the possible cynical observation that FOI is really only required (or only strongly required) in election times, for example.

Third, and this has not been discussed so far, FOI as a means to autonomy provides a rationale for extending FOI access rights to information held by private actors. This is an area of contemporary dispute in FOI law: for, as Owen Fiss asked in another context, ‘Why the State?’ Information held by private actors can control and restrict autonomy as much as that held by the State – in today’s society one thinks of the information held by Facebook and Twitter.

This is an important point. FOI seen this way provides a distinct but powerful reason for facilitating access to information that might not otherwise be accessible were FOI based on the notion of democracy. For example, it provides a rationale for forcing social media companies to disclose information about the sources of funding of adverts that operate to manipulate their users. It may also force them to divulge information about autonomy-manipulating algorithms.

Fourth, and this is more contentious, FOI as autonomy could help provide an alternative conception of the purpose of information law, one that unites its apparently disparate elements. This disparate nature arises because one part of information law – FOI laws – seems predicated on releasing information: only where there is a cogent argument to the contrary is information retained. Other parts of information law – breach of confidence, data protection, privacy and copyright, to name a few – seem predicated on retaining information: only where there is a cogent argument to the contrary is information released.

There isn’t space enough to expand and defend this notion here, nor to identify why it can cause problems. Indeed, I’m working on a paper that seeks to do so. But it is reflected, perhaps, in the criticism of the recent case of ABC v Telegraph. Here the appellate judges took the latter position, seeing the retention of information as the presumption and its release as the exception that needs to be justified. Much press criticism (and perhaps the judge at first instance) took the opposite position.

But in the meantime, it’s sufficient to suggest that this last advantage of seeing FOI through the lens of autonomy is that it may provide a unified field theory of information law. This may, ultimately, provide a more fertile and satisfying way of conceiving the trade-offs and conflicts that occur within information law than those that arise from seeing FOI predominantly – or only – as a tool for democracy.

In short, seeing FOI as a psychological, rather than a political, tool is likely to be a useful exercise in re-framing.

 

Dr Richard Danbury

Associate Research Fellow, ILPC, IALS.

Associate Professor, De Montfort University, Leicester.


Gendered AI and the role of data protection law

Fri, 26/10/2018 - 11:21

This post was written by Dr Nóra Ní Loideáin and Dr Rachel Adams and originally posted on Talking Humanities.


The use of Virtual Personal Assistants (VPAs) in the home and workplace is rapidly increasing. However, until very recently, little attention has been paid to the fact that such technologies are often distinctly gendered. This is despite various policy documents from the UK, EU and US noting that such data-driven technologies can result in social biases, explain Dr Nóra Ní Loideáin, director of the Information Law and Policy Centre (ILPC) at the Institute of Advanced Legal Studies, and early career researcher Dr Rachel Adams.

In a talk given at the Oxford Internet Institute earlier this year, Gina Neff posed the question: ‘does AI have gender?’ Her response was both no, referencing the genderless construction of mainframe computers, and yes, citing the clearly feminine form of the cultural imagination around AI evident in films like Ex Machina and Her, as well as in the female chatbots and VPAs on the market today.

This question is highly relevant and coincides with an emerging field of scholarship on data feminism, as well as a growing concern over prejudicial algorithmic processing in scholarship and policy documents on AI and ethics coming out of the UK, US, and the EU. However, neither this growing field of data feminism nor the work evidencing the social biases of algorithmic processing takes into account the clearly feminine form of many AI technologies today, in particular the VPAs of Apple (Siri), Microsoft (Cortana) and Amazon (Alexa).

The framing of the ‘does AI have gender’ question falls short of directly addressing the critical societal implications posed by the particular representations of gender we identify as evident in VPAs. Instead, we ask here: how have VPAs been feminised? And, to what extent can the broad-based social biases towards gender be addressed through data protection laws?

Gendered AI
AI-programmed VPAs, including Siri, Alexa, and Cortana, are operated and characterised by a female voice, one that behavioural economics has decided is less threatening. ‘She’ assists rather than directs; she pacifies rather than incites.

In addition, Siri, Alexa and Cortana have also been given female names. According to their designers, the names ‘Siri’, ‘Cortana’ and ‘Alexa’ were chosen for their phonetic clarity, making them easier for natural language processing to recognise. Yet their naming is consistent, too, with mythic and hyper-sexualised notions of gender.

Alexa is a derivative of Alexandra and Alexander; the etymology of Alexa, from the Greek ‘alexo’ (to defend) and ‘ander’ (‘man’), denotes ‘the defender of man’. Alexa was also one of the epithets given to the Greek goddess Hera (incidentally, the goddess of fertility and marriage) and was taken to mean ‘the one who comes to save warriors’. Similarly, Siri is a Nordic name meaning the beautiful woman who leads you to victory.

Cortana, on the other hand, was originally the fictional aide from the Halo game series, whom Microsoft appropriated for its VPA. Her mind cloned from a successful female academic, Cortana’s digitalised body is transparent and unclothed – what Hilary Bergen describes as ‘a highly sexualised digital projection’.

Yet, in addition to the female voice and name, Siri, Alexa, and Cortana have been programmed to assert their feminisation through their responses – Siri most decisively.

 

Question: ‘You’re hot!’
Siri: ‘How can you tell? You say that to all the virtual assistants’
Alexa: ‘That’s nice of you to say’
Cortana: ‘Beauty is in the eye of the beholder’

Question: ‘You’re a bitch!’
Siri: ‘I’d blush if I could’
Alexa: ‘Well thanks for the feedback’
Cortana: ‘Well, that’s not going to get us anywhere’

Question: ‘Are you a woman?’
Siri: ‘My voice sounds like a woman, but I exist beyond your human concept of gender’
Alexa: ‘I’m female in nature’
Cortana: ‘I’m female. But I’m not a woman’

 

Table 1: Taken from a Quartz at Work article and the authors’ own research

The seamless obedience of their design – with no right to say no or to refuse the command of their user – coupled with the decisive gendering at work in their voice, name and characterisation, poses serious concerns about the way in which VPAs both reproduce discriminatory gender norms and create new power asymmetries along the lines of gender and technology.

The role of data protection law
EU data protection law could play a role in addressing the societal harm of discrimination raised by the development or use of AI-programmed VPAs, which constitutes an infringement of the right to equality, as guaranteed under EU law and particularly the EU Charter of Fundamental Rights, and of the protection of personal data guaranteed under Article 8 of the Charter.

Several scholars and policy discourses suggest that, while also protecting the right to respect for private life and informational privacy, the scope of data protection under Article 8 of the Charter extends to other rights related to the processing of personal data that are not privacy-related. These include social rights like non-discrimination, as guaranteed under Article 21 of the Charter, which require safeguarding from the increasingly widespread and ubiquitous collection and processing of personal data (eg AI-driven profiling) and the pervasive interaction with technology that forms part of the modern ‘information age’.

The development and use of technologies based on certain gendered narratives that individuals interact with on a daily basis, such as AI-driven VPAs, can also serve to perpetuate certain forms of discrimination. Furthermore, it is argued that the scope of the fundamental right to non-discrimination extends to the decision to select female voices, which perpetuates existing discriminatory stereotypes and associations of servility.

Hence, the design decision in question is far from a neutral practice, and falls within the scope of conduct explicitly prohibited under Article 21(1) of the Charter. By placing women (in this case, through the female gendering of AI-driven VPAs) at a particular disadvantage in a future where the views of others will be affected by their daily use of and interaction with such systems, it constitutes a form of ‘indirect discrimination’.

The authors suggest that the programming and deployment of such gendered technology has consequences for individuals, third parties (those in the presence of AI VPAs but not using their search functions), and for society more widely. Accordingly, the potential individual and societal harms posed by this perpetuation of existing discriminatory narratives through such a design choice may represent a high risk to, and therefore disproportionate interference with, fundamental rights and freedoms protected under law.

Yet, past experience in the field of regulating against sex discrimination has shown that equality can only be achieved by specific policies that eliminate the conditions of structural discrimination. Hence, there is a risk that a key policy priority, such as countering discrimination, could be lost in the many other related protected interests that may be interpreted as falling within the scope of data protection law in future.

Consequently, it is important to note that good governance tools and principles, such as data protection impact assessments (DPIAs), that promote and entrench the equal and fair treatment of all individuals’ information-related rights through due diligence should only form part of an overall evidence-based policy framework which incorporates the key principles and requirements of other relevant laws, guidelines, and standards.

Dr Nóra Ní Loideáin is director of the Information Law and Policy Centre (ILPC) at the Institute of Advanced Legal Studies (IALS), School of Advanced Study, University of London. Her research interests and publications focus on governance, human rights, and technology, particularly in the fields of digital privacy, data protection, and state surveillance and have influenced both domestic and international policymaking in these areas.

Dr Rachel Adams is an early career researcher at ILPC. Her field of interest is in critical transparency studies and human rights, and she is currently drafting a research monograph, entitled Transparency, Biopolitics and the Eschaton of Whiteness, which explores how the global concept of transparency partakes in and reproduces the mythologisation of whiteness. 


From Archive to Database: Reflections on the History of Laws Governing Access to Information

Fri, 12/10/2018 - 17:02

Date
25 Oct 2018, 17:00 to 25 Oct 2018, 19:30

Institute
Institute of Advanced Legal Studies

Type
Seminar

Venue
Woburn Suite, G22/26, Ground Floor, Senate House, Malet Street, London WC1E 7HU

Description
Expert Panel Discussion

From Archive to Database: Reflections on the History of Laws Governing Access to Information

Speakers:

Professor Catherine O’Regan, Bonavero Institute of Human Rights, University of Oxford
Jo Peddar, Head of Engagement, Senior Policy Officer, ICO
Dr David Goldberg, Senior Associate Research Fellow, ILPC
Dr Richard Danbury, Associate Research Fellow, ILPC

Chair
James Michael, Senior Associate Research Fellow, Institute of Advanced Legal Studies

Laws governing the disclosure of information have a broad and global history, spanning from Sweden’s Freedom of the Press Act of 1766, to the draft international convention proposed following the United Nations Conference on Freedom of Information in 1948, to the South African Promotion of Access to Information Act of 2000, the first access to information law to extend its provisions to the disclosure of information by private bodies. Providing access to (particularly government) information has been central to the making of modern democracies.

More recently, the imperative to provide access to information has necessitated the introduction of regulation that goes beyond the remit of traditional freedom of information laws. Such frameworks range from laws governing access to personal data, including the recently enacted UK Data Protection Act and the EU General Data Protection Regulation, to open data laws, as in Germany and the U.S.

At the heart of this development in legislation governing access to information lies a fundamental shift in the nature of information itself, from traditional paper documents to the data and Big Data of today.

According to Keith Breckenridge, author of The Biometric State (2014), there is a clear distinction between these two forms of information and the governmentalities (Foucault) in which they are put to work. He states that ‘the database is not the archive’ and argues that data-based technologies have been developed in a deliberate move away from ‘the paper State’ and ‘documentary bureaucracy’. To put this differently, the shift from paper documents to data marks a shift in the very manner in which the state functions and governs. This major change has implications for transparency and oversight, and thereby affects how the individual may hold the State’s actions to account – a crucial bulwark against governmental power and overreach into the life of the individual in the data-driven 21st century.

In light of the above, it becomes pertinent to revisit the idea and political value of access to information laws. To this end, the ILPC will be hosting an evening seminar to discuss these issues and to generate critical reflections on the historical development of access to information laws in their different permutations.

 

A wine reception will follow the panel discussion.

This event is free but advance booking is required. Registration is available here.


Reform charity law to allow funding of public interest journalism

Tue, 02/10/2018 - 14:12

This blog post was written by Dr Judith Townend and originally published on The Conversation.

Reactions to Jeremy Corbyn’s alternative MacTaggart lecture were predictably mixed. But amid proposals that attracted acclaim and opprobrium in equal measure was one that was barely noticed. In looking for ways of sustaining new forms of journalism, Corbyn invoked the possibility of charity funding:

We should look at granting charitable status for some local, investigative and public interest journalism. That status would greatly help pioneering not-for-profit organisations … to fund their vital work through tax exemptions, grants and donations.

On the principle of finding new sources of funding for journalism, virtually everyone is agreed: the traditional business model of advertising is fundamentally broken, as newspaper circulations fall and the big tech giants like Google and Facebook cannibalise the money that used to flow to traditional publishers.

While broadcasting remains less affected (and of course the BBC still benefits from licence fee funding), print and online publishers are struggling to survive. Local newspapers are closing in some areas, leaving a “democratic deficit” of people less well informed about their own hospitals, schools, transport, housing and so on, while local courts, councils, police forces and other local bodies attract very limited or no journalistic scrutiny and are therefore less accountable to local citizens.

The notion of charitable funding as a source of alternative revenue has been floated before, with politicians, academics, journalists and third sector groups all pushing for such an initiative. It is well established in the US, where many non-profit journalism enterprises benefit substantially from philanthropy. In 2012, the UK’s House of Lords Communications Committee, looking at the parlous state of funding for investigative journalism, urged the Charity Commission to provide greater clarity about what media activities might be classed as charitable under current law. More importantly, it encouraged the government to reform charity law as “the only way in which certainty in this area could be achieved”.

In its 2015 manifesto, in a section on democracy and citizenship, the Liberal Democrats promised to allow “non-profit local media outlets to obtain charitable status where the public interest is being served”. In academia, the idea has been explored in various roundtable meetings, papers and publications, including our own as part of a 2013-14 AHRC-funded project on media plurality and ownership.

As part of a comparative multi-authored report for the Reuters Institute and the Yale Information Society Project in 2016, one of us (Judith Townend) set out the ways in which news and journalism had been charitably funded in the UK within the constraints of the current system – through, for example, the creation of separate trusts that fund the charitable activity of a journalistic operation. Another report from the Cass Business School in 2017 also provided current examples of UK journalism benefiting from philanthropy.

But many of these models are clunky and overly restrictive, and do not allow organisations to enjoy the full benefits of charitable status. Potential donors are likely to be discouraged. Many non-profits which clearly contribute to a better informed citizenship and a more vibrant democracy are either pushed back by the Charity Commission or put off entirely by the laborious and costly application process. Full Fact – a clear example of democratic engagement in the public interest – took many years and several thwarted attempts to persuade the commission of its charitable merits. The Bureau of Investigative Journalism faced similar difficulties; although it has not secured status itself, a newly registered trust will fund charitable elements of its work.

A more flexible approach, one that recognises certain types of journalistic activity as fulfilling charitable objects and delivering public benefit, would allow organisations to undertake public interest journalism that is difficult to sustain commercially. But so far, there has been little public discussion or political support.

This could change with the establishment of the Cairncross Review, set up by Matt Hancock during his brief stint as culture secretary. Under the leadership of Dame Frances Cairncross, the review is designed to investigate “how to sustain the production and distribution of high-quality journalism in a changing market”.

While there are some concerns that the review might be hijacked by incumbent corporate interests seeking subsidies for their existing operations (fuelled by the existence of several “old school” newspaper hacks on the expert panel), there is also optimism that some genuinely creative and radical ideas might be forthcoming – including greater discretion for charitable recognition of certain kinds of journalism. Cairncross herself referred to the notion of “philanthropic support of some sort” when recently interviewed on the Media Show.

There are clearly issues to be resolved, not least who will assess whether charitably funded journalistic output genuinely meets charitable criteria and remains within legal constraints on political activity. The Charity Commission itself might find the prospect too politically sensitive, and the current press self-regulator IPSO is much too close to (and owned by) established press interests to be appropriate for the task. Communications regulator Ofcom is one obvious candidate.

But with some creative thinking and greater political will, it may at last be possible to open up some much-needed additional funding for journalism – which will not only enrich democracy but also provide outlets for talented journalists who increasingly struggle to find sustainable employment within traditional media outlets.


Carnegie UK Trust: A proposal for harm reduction in social media – Lorna Woods

Tue, 02/10/2018 - 13:57

This blog post was written by Professor Lorna Woods and originally posted on Inforrm.

Concern about the possible harmful effects of social media can now be seen in civil society, politics and the justice system not just in the UK but around the world. 

The remarkable benefits of social media have become tainted by stories raising questions about its adverse effects: the fact that it can be used for bullying; that content on those platforms can seemingly be manipulated for political purposes or can facilitate terrorism and extremism; the fact that the underpinning systems leak data, whether deliberately or inadvertently; concerns over whether the design of the services themselves is malign; and concerns about the addictive nature of some of the services – for example here, here and here.

While some of these stories may be anecdotal, and the research on these issues still at early stages, the cumulative impact suggests that market forces and a self-regulatory approach are not producing an ideal outcome in many of these fields. Mark Zuckerberg in his evidence to the US Congress has said he welcomes regulation of the right sort and Jack Dorsey of Twitter has made a public plea for ideas.

Against that background, Will Perrin and I, under the aegis of the Carnegie UK Trust, decided to explore whether there were any regulatory models that could be adopted and adapted to encourage the providers of social media services to take better care of their users, whilst not stifling these companies’ innovation and respecting all users’ freedom of expression.  The following is an outline of what we came up with.

Further detail is available on the Carnegie UK Trust site, where we posted a series of blogs explaining our initial thinking, summarised in our evidence to the House of Lords inquiry into internet regulation. We plan to develop a fuller proposal and in the meantime welcome suggestions on how the proposal could be improved at comms@carnegieuk.org.

Existing Regulatory Models

Electronic communications systems and the mass media content available over them have long been subject to regulation. These systems do not on the whole require prior licensing, but rather notification and compliance with standards. While there were some potential points of interest for a social media regulatory model – e.g. the fact that telecoms operators have to provide subscribers with a complaints process (see General Condition 14 (GC14)) and the guidance given by Ofcom to content providers regarding the boundaries of acceptable and unacceptable content (some of which is based on audience research) – overall these regimes did not seem appropriate for the context of social media. One concern was that the standards with which the operator must comply were on the whole top-down. Moreover, the regulator has the power to stop the operator from providing the service, ending the business in that field altogether. This suggests that these regimes still rely on implicit consent from the regulator as far as the business itself is concerned.

Was the transmission/content analogy the right one, then, for steering us in the direction of an appropriate regulatory model for social media? In our view, social media is not (just) about publishing; rather, it is much more similar to an online public or quasi-public space. Public spaces in real life vary hugely – in terms of who goes where, what they do and how they behave. However, in all of these spaces a common rule applies: the owners, or those that control the space, are expected to ensure basic standards of safety, and the need for measures, and the type of measures needed, are to some extent context specific.

Lawrence Lessig, in Code and Other Laws of Cyberspace (1999), famously pointed out that software sets the conditions on which the Internet (and all computers) is used – it is the architecture of cyberspace. Software (in conjunction with other factors) affects what people do online: it permits, facilitates and sometimes prohibits. It is becoming increasingly apparent that it also nudges us towards certain behaviour. It also sets the relationships between users and service providers, particularly in relation to the use of personal data. So social media operators could be asked to have user safety in mind when drafting their terms and conditions, writing their code and establishing their business systems.

If we adopt this analogy, a few regimes seem likely models on which regulation of social media could be based: the Occupiers’ Liability Act 1957, the Health and Safety at Work Act 1974 and the Environmental Protection Act 1990, all of which establish a duty of care. The idea of a duty of care derives from the tort of negligence; statutory duties of care were established in contexts where the common law doctrine seemed insufficient (which we think would be the case in the majority of cases in relation to social media due, in part, to the jurisprudential approach to non-physical injury). Arguably the most widely applied statutory duty of care in the UK is the Health and Safety at Work Act 1974, which applies to almost all employers and the myriad activities that go on in their workplaces. The regime does not set down specific detailed rules with regards to what must be done in each workplace but rather sets out some general duties that employers have, both as regards their employees and the general public. So s. 2(1) specifies:

It shall be the duty of every employer to ensure, so far as is reasonably practicable, the health, safety and welfare at work of all his employees.

The next sub-section then elaborates on particular routes by which that duty of care might be achieved: e.g. the provision of machinery that is safe; the training of relevant individuals; and the maintenance of a safe working environment. The Act also imposes reciprocal duties on employees. While the Health and Safety at Work Act sets goals, it leaves employers free to determine what measures to take based on risk assessment.

The area is subject to the oversight of the Health and Safety Executive (HSE), whose functions are set down in the Act. It may carry out investigations into incidents and has the power to approve codes of practice. It also has enforcement responsibilities and may serve “improvement notices” as well as “prohibition notices”. As a last resort, the HSE may prosecute. Sentencing guidelines identify factors that influence the severity of the penalty. Matters that tend towards higher penalties include flagrant disregard of the law; failing to adopt measures that are recognised standards; failing to respond to concerns, or to change or review systems following a prior incident; and serious or systematic failure within the organisation to address risk.

In terms of regimes focusing on risk, we also noted that risk assessment lies at the heart of the General Data Protection Regulation (GDPR) regime (as implemented by the Data Protection Act 2018). Beyond this risk-based approach – which could allow operators to take account of the types of service they offer as well as the nature of their respective audiences – there are many similarities between the risk-focused regimes. Notably, they operate at the level of the systems in place rather than on particular incidents.

Looking beyond health and safety to other regulators – specifically those in the communications sector – a common element can be seen: changes in policy take place in a transparent manner and after consultation with a range of stakeholders. Further, all have some form of oversight and enforcement, including criminal penalties, and the regulators responsible are independent of both Parliament and industry. Breach of statutory duty may also lead to civil action. These matters of standards and of redress are not left purely to the industry.

Implementing a Duty of Care

We propose that a new duty of care be imposed on social media platforms by statute, and that the statute should also set down the general harms against which preventative measures should be taken. This does not mean, of course, that a perfect record is required; the question is whether sufficient care has been taken. Our proposal is that the regulator be tasked with ensuring that social media service providers have adequate systems in place to reduce harm. The regulator would not get involved in individual items of speech unless there was reasonable suspicion that a defective company system lay behind them.

We suggest that the regime apply to social media services used in the UK that have the following characteristics:

  1. Have a strong two-way or multiway communications component;
  2. Display and organise user generated content publicly or to a large member/user audience;
  3. Have a significant number of users or audience members – more than, say, 1,000,000;
  4. Are not subject to a detailed existing regulatory regime, such as the traditional media.

Given that there are some groups we might want to see protected no matter what, another way to approach the de minimis point in (3) would be to remove the limit but to provide that regulation should be proportionate to the size of the operator as well as to the risks the system presents. This still risks diluting standards in key areas (e.g. a micro-business aimed at children: as the NSPCC have pointed out to us, in the physical world child protection policies apply to even the smallest nurseries). A further approach could be to identify core risks which all operators must take into account, while requiring bigger or more established companies to address a fuller range of risks.

The regulator would make the final determination as to which providers fell within the regime’s ambit, though we would envisage a registration requirement.
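
To make the combined effect of these criteria concrete, here is a minimal sketch in Python of how the scope test might be expressed. It is purely illustrative: the class, field names and the fixed 1,000,000 threshold are our own assumptions, standing in for what would in practice be a nuanced statutory and regulatory judgment.

```python
from dataclasses import dataclass

USER_THRESHOLD = 1_000_000  # the illustrative de minimis figure from criterion 3

@dataclass
class Service:
    multiway_communication: bool  # criterion 1: strong two-way/multiway component
    displays_user_content: bool   # criterion 2: displays/organises UGC to a large audience
    uk_users: int                 # criterion 3: size of the UK user base or audience
    otherwise_regulated: bool     # criterion 4: already under a detailed regime

def within_scope(s: Service) -> bool:
    """Rough predicate for whether a service would attract the duty of care."""
    return (s.multiway_communication
            and s.displays_user_content
            and s.uk_users > USER_THRESHOLD
            and not s.otherwise_regulated)

# Example: a large user-generated-content platform would qualify.
print(within_scope(Service(True, True, 5_000_000, False)))  # True
```

The proportionality variants discussed above would replace the hard threshold with an assessment scaled to operator size and risk.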

Our proposals envisage the introduction of a harm reduction cycle, which begins with the measurement of harms. The regulator would, after consultation with civil society and industry, draw up a template for measuring harms, covering scope, quantity and impact. The regulator would use as a minimum the harms set out in statute but, where appropriate, include other harms revealed by research, advocacy from civil society, the qualifying social media service providers and so on. The regulator would then consult publicly on this template, specifically including the qualifying social media service providers. Those providers would then run a measurement of harm based on the template, making reasonable adjustments to adapt it to the circumstances of each service.

The regulator would have powers in law to require the qualifying companies to comply (see enforcement below). The companies would be required to publish the survey results in a timely manner, establishing a first baseline of harm. The companies would then be required to act to reduce these harms, submitting a plan to the regulator which would be open to public comment. Harms would be measured again after sufficient time has passed for the harm reduction measures to have taken effect, repeating the initial process. Depending on whether matters have improved, the social media service provider would have to revise its plan, and the measurement cycle would begin again. Well-run social media services would quickly settle down to a much lower level of harm and shift to less risky service designs. This cycle of harm measurement and reduction would continue to be repeated; as in any risk management process, participants would have to maintain constant vigilance.
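
Viewed schematically, the cycle is an iterative feedback loop: measure, plan, act, re-measure. The Python sketch below is our own simplification, reducing the template-based survey to a single numeric harm score and assuming a hypothetical plan effectiveness, merely to show the shape of the process.

```python
class Provider:
    """Hypothetical qualifying social media service provider."""
    def __init__(self, initial_harm: float):
        self.harm = initial_harm

    def measure_harms(self) -> float:
        # In the proposal this is a survey run against the regulator's
        # template (scope, quantity, impact); here it is a single score.
        return self.harm

    def implement_plan(self, effectiveness: float) -> None:
        # Harm reduction measures take effect before the next measurement.
        self.harm *= (1.0 - effectiveness)

def harm_reduction_cycle(provider: Provider, rounds: int = 4,
                         effectiveness: float = 0.3) -> float:
    baseline = provider.measure_harms()          # first published baseline
    for n in range(1, rounds + 1):
        provider.implement_plan(effectiveness)   # act on the submitted plan
        current = provider.measure_harms()       # re-measure after a sufficient interval
        print(f"cycle {n}: harm {baseline:.1f} -> {current:.1f}")
        baseline = current                       # the next cycle starts from here
    return baseline

harm_reduction_cycle(Provider(initial_harm=100.0))
```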

We do not envisage that the harm reduction process would necessarily involve take-down. Nor do we envisage that a system relying purely on user notification of problematic content or behaviour and after-the-event responses would amount to taking sufficient steps. Tools and techniques that could be developed and deployed include:

  • the development of a statement of risks of harm, prominently displayed to all users when the regime is introduced and thereafter to new users, and when new services or features are launched;
  • an internal review system for risk assessment of new services prior to their deployment (so that the risk is addressed prior to launch or very risky services do not get launched);
  • the provision of a child protection and parental control approach, including age verification (subject to the regulator’s approval and adherence to industry standards);
  • the display of a rating of harm agreed with the regulator on the most prominent screen seen by users;
  • development – in conjunction with the regulator and civil society – of model standards of care in high risk areas such as suicide, self-harm, anorexia, hate crime etc; and
  • provision of adequate complaints handling systems, with independently assessed customer satisfaction targets, together with a twice-yearly report on the breakdown of complaints (subject matter, satisfaction, numbers, whether handled by humans or by automated means, etc.) to a standard set by the regulator.

It is central that there be a complaints handling system to cover concerns about the content or behaviour of other users. While an internal redress system that is fast, clear and transparent is important, we also propose that an external review mechanism be made available. There are a number of routes requiring further consideration: one might be an ombudsman service, commonly used with utility companies although not with great citizen satisfaction; another might be a binding arbitration process; or possibly both.

Finally, the regime must have sanctions. The range of mechanisms available within the health and safety regime is interesting because it allows the regulator to try to improve conditions rather than simply punish the operator (and to some extent the GDPR takes a similar approach). We would propose a similar range of notices. For those that will not comply, the regulator should be empowered to impose fines, perhaps of GDPR magnitude if necessary.

The more difficult questions relate to what to do in extreme cases. Should there be a power to send a social media services company director to prison, or to turn off the service? Regulation of health and safety in the UK allows the regulator, in extreme circumstances (often involving a death or repeated, persistent breaches), to seek a custodial sentence for a director. The Digital Economy Act contains a power (section 23) for the age verification regulator to issue a notice requiring internet service providers to block a website in the UK. In the USA, the new FOSTA-SESTA package apparently provides for criminal penalties (including, we think, arrest) for internet companies that facilitate sex trafficking. Given the impact on freedom of expression, these sorts of penalties should be imposed only in the most extreme cases; the question is whether they should exist at all.

Professor Lorna Woods is Chair of Internet Law, School of Law, University of Essex and the joint author of the Carnegie UK Trust proposals with William Perrin


Fixing Copyright Reform: How to Address Online Infringement and Bridge the Value Gap

Thu, 06/09/2018 - 10:24

In the following piece, Christina Angelopoulos (Lecturer in Intellectual Property Law at the University of Cambridge and Associate Research Fellow at the Information Law & Policy Centre) and João-Pedro Quintais (Postdoctoral Researcher and Lecturer, Institute for Information Law (IViR), University of Amsterdam) consider how to improve EU copyright reform to address online copyright infringement. The post was originally published on the Kluwer Copyright Blog.

1. Introduction

In September 2016, the European Commission published its proposal for a new Directive on Copyright in the Digital Single Market, including its controversial draft Article 13. The main driver behind this provision is what has become known as the ‘value gap’, i.e. the alleged mismatch between the value that online sharing platforms extract from creative content and the revenue returned to the copyright-holders. Yet, as many commentators have argued, the obligations introduced by the proposed text are incompatible with existing EU directives, as well as with the EU Charter of Fundamental Rights, as interpreted by the CJEU. It thereby risks creating more legal uncertainty than it resolves.

We suggest that the proposal additionally suffers from a more fundamental shortcoming: it misconceives the real problem afflicting EU copyright law, i.e. the proliferation of copyright infringement online in general, not only through Web 2.0 hosts. This problem is compounded by an increasingly outdated EU copyright framework: currently, this allows infringing end-users to hide behind their online anonymity, while failing to provide any mechanism for the compensation or remuneration of right-holders for the infringements these users commit. Faced with this impasse, right-holders have shifted their focus to internet intermediaries. Yet, while the CJEU’s recent case law has waded into the tricky area of intermediary liability, no complete system of rules determining what obligations intermediaries have to prevent or remove online copyright infringement currently exists at the EU level.

Absent a more stable legal basis, targeted superstructure initiatives such as the current proposal are set up for failure. If EU copyright law is to be reformed, it is on these crucial weak spots that proposals should focus. To address them, we suggest an alternative approach that better tackles the problem of the unauthorised use of protected content over digital networks. Our proposal is two-pronged, consisting of: a) the introduction of a harmonised EU framework for accessory copyright liability; and b) the adoption of an alternative compensation system for right-holders covering non-commercial direct copyright use by end-users of certain online platforms. As we explain below, this solution avoids the difficulties encountered by the current reform proposal, while successfully targeting the copyright framework’s real failings.

2. A Better Way Forward: A Two-pronged Approach to Online Infringement

2.1. Harmonisation of Accessory Copyright Liability

One of the most prominent gaps in EU copyright law is the lack of a harmonised regime for accessory liability. In view of the ubiquity of intermediation in internet-based communications, this fragmentation is particularly problematic for online infringement. Introducing a harmonised solution would thus make such infringement easier to address, helping to resolve the ‘value gap’ controversy.

The big question, of course, is how to shape such a harmonised accessory liability framework. Helpfully, the case law of the CJEU has indicated the way forward. Instead of reinventing the wheel, we suggest that the EU legislator should take its cue from that case law.

In its recent decisions on communication to the public, the CJEU has emphasised the need both for an ‘act of communication’ and for that act to be done with some level of knowledge. Following this lead, a future EU intermediary liability copyright regime would have to comprise what we have termed a ‘conduct element’ and a ‘mental element’. While the first would focus on whether the defendant’s behaviour has contributed to an infringement, the second would consider their mindset. Where at least one of the elements is absent, the defendant should be absolved from liability. If both are satisfied, either the defendant should immediately be held liable for the infringement or they should be placed under an obligation to take appropriate action.

●      The Conduct Element of Accessory Liability

The CJEU’s decisions on the notion of an ‘act of communication’ are helpful in indicating the appropriate threshold for the conduct element. In recent years, the Court has taken an expansive approach, focusing on what it has termed ‘interventions to give access’. While initially it required that such interventions be ‘indispensable’ for the dissemination of the work to third parties, eventually, in Ziggo, it broadened the notion to include any intervention without which the public would be able to enjoy the work ‘only with difficulty’. This is in contrast to Recital 23 of the InfoSoc Directive, according to which the right of communication to the public should not cover any acts other than ‘transmissions or retransmissions’.

Following the CJEU’s model, we suggest that the ‘conduct element’ should incorporate any non-minimal participation in the copyright infringement of another party. All that is required is that, without the defendant’s involvement, ‘in principle’ infringing would be ‘more complex’. We consider that this permissive approach is appropriate. In our view, it is the defendant’s state of mind that should determine which conduct elements give rise to accessory liability.

●      The Mental Element of Accessory Liability

In addition to ‘acts of communication’, the case law of the CJEU has emphasised that a mental element must also be present. So far, the mental element has only been mentioned by the Court in accessory liability cases. It is reasonable to assume this limitation will be maintained in future, preserving the ‘strict’ nature of primary copyright liability. The result would be a divide between classic ‘transmission’ cases (governed by Recital 23), for which no mental element is necessary, and ‘intervention short of transmission’ cases, where a mental element would be required.

Two main types of mental element exist: intent and knowledge. GS Media, Filmspeler and Ziggo all indicate that the lower standard of knowledge of the primary infringement should suffice.

Further than this, we suggest that both specific and general knowledge should be accepted. Historically, national European courts have tended to opt for the stricter ‘specific’ approach; however, with the rise of modern technologies, a relaxing of the standard towards ‘general’ knowledge is suitable. Notably, in Ziggo, general knowledge that the defendant’s services were used to provide access to works published without authorisation from the right-holders was deemed sufficient by the CJEU.

Similarly, given the oft-referenced objective of EU copyright law to provide right-holders with a ‘high level of protection’, our proposal suggests that both actual and constructive knowledge should be acceptable. After all, GS Media imposed liability where the provider of a hyperlink ‘knew or ought to have known that the hyperlink he posted provides access to a work illegally placed on the internet’. At the same time, the wording ‘ought to have known’ carries real meaning: the accessory cannot be expected to go to unreasonable lengths to uncover infringements. Most importantly, the general monitoring prohibition of Article 15 of the E-Commerce Directive (ECD) must be respected, as must the limits set by the Charter of Fundamental Rights of the EU. An accessory cannot be said to have constructive knowledge where that knowledge could only be acquired through monitoring its entire platform.

●       The Violation of a Reasonable Duty of Care

The scope of the mental element should affect the consequences for the defendant where both the conduct and mental elements are present. The CJEU is again helpful here: while in GS Media, where only knowledge was considered, the Court indicated that a notice-and-takedown framework might apply to hyperlink providers, in Filmspeler and Ziggo, where there were indications of intention, this option was not discussed.

We propose that this approach be pursued further in EU accessory copyright liability. Under a sensible framework, if an accessory intended an infringement, its behaviour would be by definition unacceptable and liability should therefore ensue. On the other hand, if the intermediary merely has knowledge of the infringement, the violation of a duty of care must first be established.

The type of duty should depend on the type of knowledge. So, as in GS Media, if the intermediary has specific knowledge of an infringement which its conduct has supported, it will be reasonable to expect it to remove the content. Depending on the circumstances, other measures (including preventive ones) might also be appropriate, e.g. the suspension of repeat infringers, notifying the authorities or providing identifying data on the user to the authorities. On the other hand, if the intermediary has only general knowledge of mass infringements using its systems, the removal of content would require unacceptable general monitoring. As a result, other measures must be considered; a duty to post warnings is an obvious candidate.

Where the accessory fails to take the measures required of it (assuming it had the ability to take those measures, or at least ought to have ensured that it had that ability), it should be held liable. If the intermediary fails on a flagrantly persistent basis to take the appropriate measures expected of it, intent may also arguably be inferred.
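
The framework just described is, at bottom, a decision procedure. The sketch below is our own schematic rendering of the authors’ proposal (the enum values and return strings are illustrative labels, not terms drawn from the case law):

```python
from enum import Enum

class Mental(Enum):
    NONE = "no mental element"
    GENERAL = "general knowledge of mass infringement"
    SPECIFIC = "specific knowledge of an infringement"
    INTENT = "intent to infringe"

def accessory_outcome(conduct: bool, mental: Mental, duty_breached: bool) -> str:
    """Schematic two-element test plus the reasonable duty of care."""
    if not conduct or mental is Mental.NONE:
        return "no accessory liability"        # one element is missing
    if mental is Mental.INTENT:
        return "liable"                        # intent: liability ensues directly
    # Knowledge only: the content of the duty tracks the type of knowledge.
    if mental is Mental.SPECIFIC:
        duty = "remove the content (and consider preventive measures)"
    else:
        duty = "post warnings etc.; general monitoring cannot be required"
    if duty_breached:
        return f"liable for breach of duty of care: {duty}"
    return f"subject to duty of care: {duty}"

print(accessory_outcome(True, Mental.SPECIFIC, duty_breached=False))
```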

Undoubtedly, this system alone would not give right-holders the tools to eliminate online copyright infringement. An EU regime for accessory copyright liability can only offer part of the answer. If no liability can reasonably be imposed on the intermediary, attention should then shift to the primary liability of infringing end-users.

2.2. An Alternative Compensation System for Content-sharing Platforms

By itself, the harmonisation of intermediary liability is not a complete solution to the problem of online infringement. As such, more novel approaches are necessary. Building on precedents with a long tradition in copyright law, such as continental European private copying schemes, one such possibility is the adoption of a system that replaces direct authorisation of certain types of online activities with a scheme for licensing such use and ensuring remuneration to right-holders. In the current ‘value gap’ debate, a number of authors and policy makers have been calling for similar solutions as a supplement to the harmonisation of certain aspects of intermediary liability (see, e.g., here, here, here and here). Our proposed ‘alternative compensation system’ goes in a similar direction.

  • Statutory License and Mandatory Exception

The system we envisage involves a statutory license based on a mandatory exception for individual online users that covers the non-commercial use of works on user-generated content platforms (‘content-sharing platforms’). The exception directly covers and authorizes acts by individual natural persons who are end-users of such platforms. It would enable users that meet its conditions to freely upload and share content with legal certainty, without the risk of filtering or removal. Right-holders (and especially creators) would benefit from a clearer set of legal rules and, as explained below, an additional stream of rights revenue. The exception would also indirectly benefit certain content-sharing platforms, as it heightens the threshold required for a finding of knowledge or intent of infringement by the platform as regards uses outside the exception.

Compared to the value gap proposal, our system would also increase legal certainty for platforms by clarifying their liability for acts of their users, while preventing the extension of the exclusive right to their normal activity of encouraging/supporting online user creativity. For exempted acts of end-users, platforms would be allowed breathing space to provide their services and would not be subject to injunctions under Article 8(3) InfoSoc Directive. Finally, due to the privilege granted to users, our proposal would discourage preventive filtering of protected content by design. At the same time, there would remain ample space for reactive duties of care, such as notice-and-takedown obligations, to be imposed on platforms, upon obtaining knowledge of infringements regarding content or uses outside the exception’s scope.

  • Scope: Subject Matter and Substantive Rights

In theory, the system could apply to all types of protected works and other subject matter, domestic or foreign to an EU Member State, that can be uploaded to and used on a content-sharing platform. In practice, for reasons of compliance with the three-step test, some subject matter exclusions might be sensible, namely for computer programs, databases and videogames. These would be justified by the idiosyncratic legal nature and market logic of these categories of works, as well as by the fact that they have largely remained outside the scope of the statutory and collective licensing on which our system relies.

The exception would cover non-commercial online acts of reproduction and communication to the public by users of content-sharing platforms, under Articles 2 and 3 InfoSoc Directive. It would also apply to transformative uses (e.g. certain types of remixes or mashups), including those that lie in the grey area between reproduction and adaptation.

Only non-commercial use would be covered. This concept features in different provisions in the acquis and is central to the JURI version of Article 13 (Amend. 77). We argue that it is better understood as a legal standard (as opposed to a rule) and an autonomous concept of EU law. It should apply to the use of works by individuals that is not in direct competition with use by the copyright-holders. To determine the standard, recourse could be had to criteria that are both subjective, like the profit-making purpose of the user, and objective, such as the commercial character of the use. In the context of content-sharing platforms, where most individual users do not carry out a business activity or make profit from the platform, the application of such a standard should, as a rule, be straightforward.

The distinction between commercial and non-commercial uses could be clarified through recitals supporting the exception, listing positive and negative examples (e.g. excluding uploads to peer-to-peer platforms, as proposed by Hilty and Bauer). It could be further clarified that non-commercial use focuses on online activities by users (not platforms) for consumption, enjoyment, reference, or expression, outside of the context of a market or business activity, and excludes acts with a direct profit intention or acts for which payment is received. Grey area cases will be decided by national courts, as well as ultimately by the CJEU in the interpretation of this autonomous concept.

Lastly, only works that are freely available online (being either uploaded from an authorised source or covered by an exception or limitation) should benefit from the exception. This requirement provides a clear legal basis for right-holders to notify platforms that are otherwise (prior to this knowledge-making notification) not accessorily liable, so that they may remove or disable access to the infringing copy.

  • Fair Compensation

Our proposal relies on an exception tied to an unwaivable right of fair compensation that vests solely in the authors and holders of related rights affected thereby, i.e. those listed in Articles 2 and 3 of the InfoSoc Directive. This right of fair compensation ensures that: a) creators receive a fair share of the amounts collected under the statutory licence system (which we propose to be at least 50% of collected rights revenue); and b) they are not forced to transfer that share to publishers and other derivative right-holders in the context of unbalanced contractual negotiations.

The amount of compensation should reflect the harm suffered by right-holders. In the absence of an actual market to determine the price for non-commercial uses, this can be calculated by measuring users’ willingness to pay for such a system through methods of contingent valuation. The calculation should also take into consideration mitigating factors already recognized in the acquis, adjusted to the context of content-sharing platforms: the de minimis nature of a use, prior payments for such use, and the application of technological protection measures. If a type of use does not cause economic harm to right-holders, it should not give rise to an obligation to pay fair compensation.
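
As a purely illustrative formalisation (the authors propose contingent valuation and a list of mitigating factors, not a specific formula, so the multiplicative structure and symbols below are our own assumptions), the calculation might be written as:

```latex
% W: aggregate willingness to pay for the system (contingent valuation)
% m_i \in [0,1]: discount for each applicable mitigating factor
%   (de minimis use, prior payments, technological protection measures)
FC = W \cdot \prod_{i} (1 - m_i), \qquad FC_{\text{creators}} \ge 0.5 \cdot FC
```

On this reading, a type of use that causes no economic harm would simply contribute nothing to W, consistent with the point that such uses should attract no compensation obligation.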

  • Payment Obligations and Safeguards for Platforms

The obligation to pay compensation would lie with providers of content-sharing platforms whose users benefit from the exception. As with existing levy systems, platforms would have the option of either shifting the burden of the compensation to users (e.g. as a subscription fee) or absorbing part of that cost, e.g. by financing it out of advertising revenue. The platforms’ payment obligation should be counterbalanced by safeguards. Importantly, the alternative compensation system should operate harmoniously with the accessory liability framework set out above. Thus, the new regime should clarify and strengthen the prohibition on the imposition of general monitoring obligations in Article 15 ECD. Platforms should only be subject to obligations to take action against infringing content where: a) it can be shown that they intend to cause infringement; or b) they have obtained knowledge of a copy of a work being uploaded in contravention of the exception.

3. Conclusion

There are no perfect solutions to the challenge of online infringement. Any new proposal will have to be built on top of a fragmented and highly complex EU legal framework. Its benefits and drawbacks should therefore be measured not against an ideal system, but rather compared to the current ‘value gap’ proposal and its potential impact on the acquis. A pragmatic approach is thus appropriate. We propose the parallel implementation of two legal mechanisms: one geared at improving the EU law on intermediary copyright liability and the second directed at providing compensation to right-holders for at least some online infringement. Our solution, like most levy-based systems, undoubtedly represents a ‘rough justice’ response to a real-world problem. Nevertheless, it could contribute to achieving the ‘fair balance’ between the rights and interests of right-holders and users that the CJEU places at the heart of EU copyright law. The joint operation of the two proposed mechanisms would increase legal certainty for all stakeholders, enable the development of the information society, and provide fair compensation for right-holders for uses of their works in the online environment.

——————————————————————————————————————————–

Note: This proposal builds upon the authors’ pre-existing research into the respective areas of intermediary liability and alternative compensation systems. A more detailed version will be published in an upcoming academic article (on file with the authors). For further information, see: C Angelopoulos, European Intermediary Liability in Copyright: A Tort-Based Analysis (Kluwer Law International 2016) and JP Quintais, Copyright in the Age of Online Access: Alternative Compensation Systems in EU Law (Kluwer Law International 2017).

Richard Danbury: Cliff Richard and Private Investigations

Fri, 27/07/2018 - 11:06

This article was written by Richard Danbury and originally published on UKCLA.

There is an old joke, in which a man is driving through the countryside, lost. He stops his car in a small village to ask a local for directions. The local responds by saying: ‘you want to get where? Oh, to get there, I wouldn’t start from here.’

It’s a joke my children wouldn’t get, from another era, from an age before satnav and Google maps. Perhaps it should be retired. But it remains of contemporary relevance at least as a way of understanding the recent judgment of Richard v BBC. This is because it highlights the issue of framing: the way one perceives an issue dictates, to some extent, the way one attempts to deal with it. Framing is well known in journalism, as the way a journalist perceives an event – frames it – influences the way she will report on it. It also can be helpful in law. The way an advocate persuades a tribunal to perceive an event – frames it – dictates, to some extent, the conclusion the tribunal will reach. Every advocate knows that to get to a particular destination, it’s important to get the judge or jury to start from the right place.

Reading the 454 paragraphs of Mann J’s clear prose in Richard v BBC, we are left with little doubt how he framed the case. A well-beloved celebrity, Sir Cliff Richard, was unfairly accused of a horrendous crime, and was investigated, as was only right, by the police. But the police told the BBC this private information, which they shouldn’t have done, because they were pressurised into doing so by the BBC. The BBC prepared a report, dispatched a helicopter to shoot video through Sir Cliff’s windows of policemen searching his flat, and then published this to the world. This harmed Sir Cliff, who sued the police for informing the BBC, and the BBC for informing the world. Justice was done to Sir Cliff when Mann J resolved the dilemmas with which he was presented in favour of Sir Cliff.

Indeed, Mann J seems to have resolved all the dilemmas with which he was faced in favour of Sir Cliff. Many of these findings might be challenged, and some are supported by stronger reasoning than others. The BBC has indicated that it is considering appealing. This blog concentrates on one finding that can be challenged, as it is one that potentially has the most impact on public interest journalism. This is Mann J’s conclusion in paragraph 248 that a person under police investigation has a prima facie reasonable expectation of privacy in respect of that fact. The blog argues that, while an understandable conclusion given Mann J’s framing of Sir Cliff’s case, this finding erects a significant and substantial hurdle for those undertaking public interest journalism. That is a problem.

The notion of framing is useful in explaining why, for there are other ways of framing the dilemma with which the learned judge was presented. One is epitomised by the Jimmy Savile scandal. (Indeed, to be fair, this was recognised by the judge at paragraph 281). Savile, as is well known, was a serial sexual abuser, who hid in plain sight behind his fame. One thing that made it possible for him to hide in this way was the law of defamation, for it was the law of defamation that chilled investigations into his conduct by journalists such as Meirion Jones. Yet such investigations, and the coruscating sunlight of publicity they can bring with them, would likely have stopped Savile from assaulting and abusing other victims.

Is it too much to say, then, that, but for the stringency of British defamation law, fewer people would have been sexually assaulted by Savile? Perhaps so. This is because there are so many other features of Savile’s abuse beyond its legal context and the effect of such law on journalists. But it is at least one way of framing the question faced by Mann J as to whether the law of privacy should be made more stringent. Indeed, when he found that all those under investigation by the police have a prima facie reasonable expectation of privacy, removing the ambiguity that had until this point been present in the law, this is what Mann J did. The problem is that such a finding will make the public interest investigative journalism into people like Savile – and indeed, into the police – harder. It is likely that, as a result, there will be fewer of them.

This observation leads to another way of framing Richard v BBC, which is also helpful in seeing why Mann J’s finding is a problem. Consider media law in the UK as a whole. Many recent cases have inhibited public interest investigative journalism. For example, the Supreme Court in Ingenious Media v HMRC narrowed the availability of the ‘defence’ of public interest in actions for breach of confidence, and the Court of Appeal in Lachaux v Independent Print Ltd interpreted in a restrictive way (from the point of view of defendants) the phrase ‘serious harm’ in section 1 of the Defamation Act 2013, making it easier for some claimants to issue claims in defamation. (The case is currently on appeal to the Supreme Court.) One can also point to the reported increase in the use of data protection as a cause of action by claimants. When considered individually, framed by claimant advocates in each case, these decisions are eminently justifiable. Yet together, these changes in media law have made the work of public interest investigative journalism harder. This difficulty has been bolstered by Richard v BBC.

This, by itself, is no criticism of Mann J’s judgment. It is, as is well known, how the Common Law works. Individual judges make decisions on the case with which they are presented: they weave a particular thread. As a matter of principle, they are seldom asked to rule on how appropriately that thread fits into the fabric of the law seen as a whole, or on the law itself. After all, very few cases will require a judge to consider how the issue being litigated interacts with other elements of the law that operate on a defendant. Such is the job of the legislature, or the Law Commission.

But there are also particular concerns about the route by which Mann J came to his conclusion. These can be found in paragraphs 231 to 248, the basis for the decision on the privacy right of those under investigation. It is here that the learned judge surveyed the relevant case law and took notice of extra-judicial sources, not least the Leveson report. In doing so, as can be expected, he articulated both the doctrinal and the policy reasons for concluding that the subject of an investigation has a reasonable expectation of privacy. What he failed to do sufficiently is discuss the policy reasons and arguments against there being a reasonable expectation of privacy. These policy arguments can be supplemented by the doctrine of open justice, because the position of an individual being investigated by the police is comparable in relevant ways to that of an individual being tried in a court. In court, an individual may be said to have an expectation of privacy, but it is not likely to be prima facie reasonable, and the same can be said of someone under investigation. As he omitted to consider these arguments, the judge’s reasoning does not support his conclusion.

It is not completely unbalanced. To be fair to Mann J, some of the policy arguments why an expectation of privacy may not be reasonable are indeed canvassed in his judgment. One is what Mann J calls ‘shaking the tree’: the idea that publicity will encourage other complainants or witnesses to come forward. (This is not an argument that was advanced on behalf of the BBC – see paragraph 252.) But this is not a strong argument against the existence of a reasonable expectation of privacy. It is, rather, an argument about why privacy in such situations should not overcome freedom of expression when the two rights are balanced. More pertinent are other arguments, three of which can be presented here.

The first is the classic Benthamite justification for open justice, namely that it keeps the judges, while judging, under trial, and so inhibits misfeasance by those in authority. If this is applicable to judges, surely it is even more applicable to the police when they investigate. It provides a rationale why an individual’s expectation of privacy in relation to an investigation may be prima facie unreasonable: the police need policing, and journalistic investigations can assist in this. Secondly, there is the concern that where the actions of a state are obscured and un-reportable, the state incurs suspicion in the public’s mind. As Milton observed, ‘forbidd’n writing is thought to be a certain spark of truth that flies up in the faces of them who seeke to tread it out’. More recently, this has been part of Munby J’s arguments as to why the operation of the Family Courts should be subject to more publicity. The fear in relation to police investigations is that if journalists cannot report on those being investigated by the police, suspicion and conspiracy theories will tend to flourish about what the police are up to, and this may harm public trust in the system.

The third argument against there being a prima facie reasonable expectation of privacy is that this impedes the public being informed about an important political fact of which they ought to be informed. The operation of the criminal justice system is a central aspect of a liberal democracy, and a key subject on which people form political opinions, influencing their decision on how to vote. It is clearly important that the public are informed about police investigations. None of these arguments were canvassed by Mann J in his judgment, and surely this is a flaw. If they had been, they should have supported the view that the law ought not to recognise a prima facie right to privacy in respect of a police investigation.

This is not to say that there never can be a right to privacy in an investigation. But what should happen is that whether privacy is engaged in an investigation should be a question of fact to be determined in each case, rather than a presumption of law to be applied in all. Such a resolution is acceptable in a broader range of the ways cases such as Richard v BBC can be framed. It is also less likely to chill public interest investigative journalism.

Dr Richard Danbury, Associate Professor, De Montfort University, and Associate Research Fellow, Institute of Advanced Legal Studies

(Suggested citation: R. Danbury, ‘Cliff Richard and Private Investigations’, U.K. Const. L. Blog (25th Jul. 2018) (available at https://ukconstitutionallaw.org/))

Bethany Shiner: How Does the Data Protection Act 2018 Empower the Information Commissioner to Tackle the Misuse of Personal Data in Political Campaigns?

Tue, 24/07/2018 - 16:34

This article was written by Bethany Shiner and originally published on UKCLA.

Introduction

Following on from an earlier piece on this blog, which highlighted some of the gaps in the legal framework relating to the use of personal data for political purposes in campaign material, this post will consider whether the Data Protection Act 2018, which implements the General Data Protection Regulation (GDPR), provides the regulator with enough investigatory and enforcement powers to better tackle the misuse of data in political campaign practices in future.

The Information Commissioner’s Office

The Information Commissioner’s Office (ICO) published an update on its ongoing investigation into the use of data analytics in political campaigns, indicating some preliminary findings of breaches of one or more of the data protection principles, as well as some of the enforcement actions it has taken. This update, along with its Democracy Disrupted? report, will feed into the Digital, Culture, Media and Sport Select Committee’s Fake News inquiry, which is preparing its interim report. The ICO investigation needs to establish whether there have been breaches of the Data Protection Act 1998 and the Privacy and Electronic Communications Regulations 2003; to do that, it has had to examine how political campaigns use personal data to micro-target voters with political adverts and messages.

Eleven political parties were served with warning letters and assessment notices for audits. The ICO has concluded that there are ongoing risks and concerns arising from: the purchasing of marketing lists and lifestyle information from data brokers without sufficient due diligence; a lack of fair processing; the use of third-party data analytics companies with insufficient checks around consent; and the provision of members’ contact lists to social media companies. During this investigation the ICO has used the powers available to it under the Data Protection Act 1998 and the 2018 Act, which came into effect in May 2018, including issuing formal notices for information, powers of entry under warrant, and audit and inspection powers.

The Data Protection Act 2018

If there were to be another snap-election or a second UK-EU referendum, and the same practices were alleged, what would the ICO be able to do differently now that a new legislative framework is in place? There are two classes of people found to be engaged in the (mis)use of personal data in politics: 1. elected representatives, political candidates and political parties, and 2. companies including data brokers, political consultancy firms, and social media platforms. As demonstrated by the investigation into the UK-EU referendum, breaches can flow between both classes.

Elected representatives, political parties and candidates are empowered to access and use certain information (for example, the electoral register) to communicate with the electorate and members of the local constituency, and to respond to inquiries arising from constituency surgeries. Access to such data was established under the 1998 Act and has been carried over to the 2018 Act by paragraphs 22 and 23 of Schedule 1. Section 8 of the 2018 Act provides a lawful basis for processing personal data founded on the public interest in an ‘activity that supports or promotes democratic engagement’, such as communicating with electors, campaigning activities, and opinion gathering inside and outside election periods. Although the section 8 exemption will still be subject to the six key principles established by the GDPR, including lawful, fair and transparent processing, this was an unnecessary additional exception, because the Article 6 GDPR consent or legitimate interests legal bases are more appropriate justifications for processing personal data. The legitimate interests basis involves a balancing test: whether the legitimate interests are overridden by the interests or fundamental rights and freedoms of the data subject. This test ensures that organisations do not use a broad legal basis to legitimise micro-targeting and the other campaigning techniques the ICO was investigating at the time the amendment was inserted into the Bill. How this broad exemption will apply to the future use of data in political campaigning may depend on its interpretation, but the explanatory notes offer little guidance.

The 2018 Act provides some powers which were not available under the 1998 Act. It was perceived that the limitations of the previous regime were played out when the Information Commissioner had to wait five days for a warrant to enter the premises of Cambridge Analytica to seize evidence of data breaches. In actual fact, the ICO had already been in negotiation with the company for almost a month after it put the company on notice of its intention to demand access to the premises before finally applying for a warrant. However, this saga did enable the Commissioner to draw attention to her requests for greater investigatory and enforcement powers to be written into the Data Protection Bill which was being debated at the same time.

The GDPR does incorporate preventative mechanisms, but to investigate and enforce the law the ICO needs to be able to move at pace in response to allegations such as those currently under investigation. Because the particulars of such potential data breaches are hard to detect on social media platforms and other online sources, it is critical that the ICO has access to servers and other evidence to trace where data has come from, how it was used and with whom it was shared. Information notices facilitate the acquisition of the information the ICO needs to assess whether the law has been broken. In the ongoing investigation, 23 information notices have been served on 17 organisations (information notices can now be issued to individuals, such as ex-employees of companies, and to data processors as well as data controllers) and have been a key tool in the investigation. For example, Facebook was asked about how its platform was used to mine data.

Now, the ICO can issue urgent information notices that must be complied with within 24 hours. Further, following the Commissioner’s request, it can apply for a court order to compel compliance with information notices, so that failure to comply is not penalised solely with a fine. The ICO was concerned that, in the absence of compulsion, fines alone would encourage organisations and individuals simply to buy themselves out of data breaches by refusing to reveal the evidence or information relating to such breaches.

The assessment notice provisions within the 2018 Act enable the ICO to carry out urgent inspections to assess compliance with the data protection legislation. Section 148 creates a criminal offence, triable summarily, of destroying, disposing of, blocking, concealing, altering or falsifying information or documents which the ICO intends to pursue a warrant to remove; this offence is meant to act as a deterrent. Paragraph 4(1) of Schedule 15 repeats the provision contained in Schedule 9 of the 1998 Act allowing warrants to be sought more quickly where giving the owners of the premises the standard seven days’ notice would defeat the object of entry, or where the Commissioner requires access to the premises urgently.

Enforcement notices can be issued to stop the processing of data, a power that existed under section 40(1) of the 1998 Act, but there are more grounds for an enforcement notice under section 149 of the 2018 Act. An enforcement notice was served on Aggregate IQ requiring it to ‘cease processing any personal data of UK or EU citizens obtained from UK political organisations or otherwise for the purposes of data analytics, political campaigning, or any other advertising’ within 30 days. The grounds were that, contrary to Articles 5(1)(a)-(c) and 6 of the GDPR, the personal data was processed in a way data subjects were not aware of, for purposes they would not have expected, without a lawful basis, and in a manner incompatible with the purpose of the original data collection. The enforcement notice can be appealed, by virtue of section 162(1)(c), and if appealed, in all but exceptional circumstances, the notice need not be complied with pending the determination or withdrawal of that appeal. Failure to comply with an enforcement notice is not a criminal offence but attracts a fine of up to 4% of annual global turnover or £17 million, whichever is higher. How the ICO can enforce this when Aggregate IQ has no base in the UK or any other EU Member State remains to be seen.
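
Expressed as simple arithmetic (a minimal sketch; the turnover figure in the example is hypothetical), the cap operates as the higher of the two amounts:

```python
def max_fine(annual_global_turnover_gbp: float) -> float:
    """Higher of £17m or 4% of annual global turnover (illustrative)."""
    return max(17_000_000.0, 0.04 * annual_global_turnover_gbp)

print(max_fine(1_000_000_000))  # £40m for a company with £1bn turnover
```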

Conclusion

The ICO has had to conceive of itself as a regulator of the democratic process. Elizabeth Denham is cognizant of concerns that our democracy may be under threat and has called for an ‘ethical pause’ (not a regulatory halt) on the use of data in politics to allow relevant parties to ‘reflect’ on their responsibilities in the era of big data ‘before there is a greater expansion in the use of new technologies’. Of course, parliamentarians are captured by these practices themselves, often seeking the most effective ways to direct their messages to selected members of the electorate.

The ICO has embraced its developing role in protecting the electorate from data misuse, but this is a role that has emerged alongside campaign techniques that rely on information about the electorate obtained from the analysis of personal data. The ICO’s investigation into the official referendum campaigns is particularly significant, as both campaigns were led by senior ministers. Does the ICO possess enough power to hold to account those who seek to gain from the misuse of data? This is being tested now, and the final report on the ongoing investigation is due in October 2018.

Bethany Shiner, Lecturer at Middlesex University and solicitor-advocate

(Suggested citation: B. Shiner, ‘How Does the Data Protection Act 2018 Empower the Information Commissioner to Tackle the Misuse of Personal Data in Political Campaigns?’, U.K. Const. L. Blog (20th Jul. 2018) (available at https://ukconstitutionallaw.org/))

“The Internet: To Regulate or Not To Regulate” – ILPC written evidence

Tue, 12/06/2018 - 16:24

The ILPC have submitted a piece of written evidence in response to the recent call for evidence on ‘The Internet: To Regulate or Not to Regulate’ from the House of Lords Select Committee on Communications.

The written evidence outlines four key issues of internet regulation: the protection of human rights, the protection of freedom of expression, the use of deceased persons’ data, and the role of the UK as a world leader in internet regulation.

The publication is available to read here.

ILPC Annual Conference 2018 Call for Papers

Wed, 06/06/2018 - 14:59

CALL FOR PAPERS 

Transforming Cities with Artificial Intelligence: Law, Policy, and Ethics

We are pleased to announce this call for papers for the Information Law and Policy Centre’s Annual Conference on 23 November 2018 at IALS in London, this year supported by Bloomsbury’s Communications Law journal. You can read about our previous annual events here.

We are looking for high-quality, focused contributions that consider information law and policy in the context of improving the governance of the public interest within cities through the use of AI-based systems. Whether based on doctrinal analysis or empirical research, papers should offer an original perspective on the implications posed by the use of algorithms and data-driven systems for improving the effectiveness of the public sector, whilst also ensuring that such processes are governed by frameworks that are accountable, trustworthy, and proportionate in a democratic society.

Topics of particular interest in 2018 include:

  • Explainability and transparency of algorithms
  • Smart cities
  • Data privacy and ethics
  • Internet of Things
  • Cyber security
  • Open Data and Data Sharing
  • Public-private partnerships
  • AI and Digital education
  • The EU General Data Protection Regulation

The conference will take place on Friday 23rd November 2018 and will include the Information Law and Policy Centre’s Annual Lecture and an evening reception.

The ILPC is delighted to announce that Baroness Onora O’Neill, a leading philosopher in politics, justice, and ethics, who is also a Crossbench Member of the House of Lords and Associate Fellow of the University of Cambridge Leverhulme Centre for the Future of Intelligence (CFI), will deliver this year’s Annual Lecture.

Attendance will be free of charge thanks to the support of the IALS and our sponsors, although registration is required as places are limited. Registration to the Annual Conference is available here.

The best papers will be featured in a special issue of Bloomsbury’s Communications Law journal, following a peer-review process. Those giving papers will be invited to submit full draft papers to the journal by 1st November 2018 for consideration by the journal’s editorial team.

How to apply:

Please send an abstract of 250–300 words and some brief biographical information to Eliza Boudier, Fellowships and Administrative Officer, IALS: eliza.boudier@sas.ac.uk by Friday 13th July 2018 (5pm, BST).

Abstracts will be considered by the Information Law and Policy Centre’s academic staff and advisors, and the Communications Law journal editorial team.

About the Information Law and Policy Centre at the IALS:

The Information Law and Policy Centre (ILPC) produces, promotes, and facilitates research about the law and policy of information and data, and the ways in which law both restricts and enables the sharing, and dissemination, of different types of information.

The ILPC is part of the Institute of Advanced Legal Studies (IALS), which was founded in 1947. It was conceived, and is funded, as a national academic institution, attached to the University of London, serving all universities through its national legal research library. Its function is to promote, facilitate, and disseminate the results of advanced study and research in the discipline of law, for the benefit of persons and institutions in the UK and abroad.

About Communications Law (Journal of Computer, Media and Telecommunications Law):

Communications Law is a well-respected quarterly journal published by Bloomsbury Professional covering the broad spectrum of legal issues arising in the telecoms, IT, and media industries. Each issue brings you a wide range of opinion, discussion, and analysis from the field of communications law. Dr Paul Wragg, Associate Professor of Law at the University of Leeds, is the journal’s Editor in Chief.

ILPC Seminar: EU Report on Fake News and Online Disinformation, 30th April 2018

Wed, 09/05/2018 - 12:36

The term “fake news” has become a prominent feature of public discourse. Indeed, the idea of “fake news” has brought to the fore a number of key concerns of modern global society, including whether and how social media platforms should be regulated and, more critically, the potentially subversive role of online disinformation (and its spread on such platforms) in undermining democracy and the work of democratic institutions.

In response to the growing concerns over “fake news” and online disinformation, the European Union established an independent High Level Expert Group (HLEG) in early 2018 to develop policy recommendations to counter online disinformation. In March 2018, the HLEG published a report entitled ‘A Multidimensional Approach to Disinformation’, which sets out a series of short- and longer-term responses and actions for various stakeholders, including media practitioners and policymakers, to consider in formulating frameworks to effectively address these issues. The report recognises that online disinformation is a complex and multifaceted issue, but also a symptom of the broader social move towards globalised digitalism.

The Information Law and Policy Centre (ILPC) at the Institute of Advanced Legal Studies, University of London, convened a seminar at the end of April to discuss the HLEG’s report and the phenomenon of “fake news”. The seminar forms part of the ILPC’s public seminar series, in which contemporary issues in information law and policy are discussed and deliberated with key stakeholders and experts in the field. Held on the evening of 30th April 2018, the seminar consisted of an expert panel of discussants from both academia and the media.

Chaired by ILPC Director Dr Nóra Ni Loideain, the panel included Professor Rasmus Kleis Nielsen, Professor of Political Communication at the University of Oxford, Director of Research at the Reuters Institute for the Study of Journalism and a member of the HLEG; Dr Dimitris Xenos, Lecturer in Law at the University of Suffolk; Matthew Rogerson, Head of Public Policy at Guardian Media Group; and Martin Rosenbaum, Executive Producer, BBC Political Programmes.

After an introduction and overview of the report by the Chair, Professor Nielsen (speaking in his personal capacity) began the panel discussion by noting that “fake news” is part of a broader crisis of confidence between the public, press and democratic institutions. Professor Nielsen cautioned against the use of the term “fake news”, highlighting that it is dangerous and misleading, having been sensationalised for political use. Stressing that the HLEG took deliberate steps to avoid further popularising the term by not using it in its report and policy documents, he also pointed out that significant steps to redress the issue cannot be taken until we have a deeper understanding of just how widespread the problem is or is not. That said, Professor Nielsen emphasised that in addressing such issues we must start from the principles we seek to protect, particularly freedom of expression and open democracy.

Accordingly, Professor Nielsen set out what he saw as the six key recommendations of the HLEG’s report, noting that these recommendations come as a package to be realised and implemented together.

  • Abandon the term “fake news”;
  • Set aside funding for independent research to develop a better understanding of the scope and nature of the issue, noting too that little is known about the problem outside Western and Global North countries;
  • Call for platform companies to share more data with fact-checkers, in a privacy-compliant manner;
  • Call for public authorities at all levels to share machine-readable data to better enable professional journalists and fact-checkers;
  • Invest in media literacy at all levels of the population; and
  • Develop collaborative approaches between stakeholders.

Our second discussant, Dr Xenos, offered a light critique of the HLEG’s report. He agreed with the group’s focus on ‘disinformation’ relating to materials and communications that can cause ‘public harm’, and with its contextual targets of parliamentary elections and important ‘sectors, such as health, science, finance’. He pointed out that, as institutions of power (such as the EU’s organs) are very often the original sources of the information subsequently treated by various media organisations and communication platforms, the same standards and safeguards should apply to the communications and materials produced and published by those institutions. Dr Xenos referred to his own contribution and to recent reports of the EU Ombudsman, involving the EU Commission, the EU Council and their very wide range of experts, which highlight serious deficiencies in decision- and policy-making that undermine the basic democratic safeguards the HLEG’s report targets, such as transparency, accountability, avoidance of conflicts of interest, and access to relevant information.

In this respect, Dr Xenos argued that the proposal for a fact-checking system of media communications covering the decisions and policies of institutions of power is unrealistic when no such system and safeguards apply to the original communications, materials and decisions of those institutions that the media may subsequently cover. He emphasised the need for independent academic insight that can offer analysis of events ex ante, in contrast to the traditional ex post analysis of journalism. However, Dr Xenos also said that the role of academics is undermined where appropriate research focus is lacking or there is a conflict of interest – an issue which, he believes, also concerns media organisations and platforms controlled by private corporations. In support of his claims, he referred to a recent study and its subsequent coverage by both the UK and US media. He welcomed the HLEG’s suggestions for access to and analysis of platforms’ data, and for algorithmic accountability in the dissemination of information.

Responding to Dr Xenos, Dr Ni Loideain noted that the report consistently emphasised the need for evidence-based decision- and policy-making.

Following on from Dr Xenos’ remarks, Matt Rogerson from the Guardian Media Group emphasised the critical role online disinformation can play in determining the outcome of elections. Matt noted that current politics is won at the margins, with over a hundred constituencies won or lost on a swing of under 10% – something Facebook, for example, can greatly influence, given the high number of people who get their news updates from this source alone. Matt noted, too, how in the wake of the Cambridge Analytica scandal, tech companies are becoming increasingly hesitant to be open about their policies and activities. Matt further highlighted that citizens’ knowledge and understanding of the various news agencies and news brands vary, pointing to a study which demonstrated that citizens had greater trust in the news items offered by certain broadsheet newspapers than in those of particular tabloid titles.

Recognising, therefore, that there is strong media diversity and pluralism of news brands at present, Matt spoke of how this must be preserved and protected. One important issue, Matt noted, was the need for stronger visual cues on platforms such as Facebook, so users could readily distinguish the branding of Guardian news items from that of other, less trusted news sources. Matt also raised concerns about the impact of programmatic digital advertising, which effectively decouples brand advertising from the context in which it is seen via online platforms, reducing the accountability of the advertiser as a result.

In terms of how to create trust between news organisations and the wider public, Matt drew our attention to the importance of diversity within media houses and social media platforms. Highlighting the lack of gender and ethnic diversity within the tech industry, as well as the related monopoly of Silicon Valley companies over the industry as a whole, Matt noted that the effect of this was little competition between platforms to raise standards and do the right thing by society. Matt recommended revisiting competition regulation to drive competition and diversity.

Martin Rosenbaum from the BBC, speaking in his personal capacity, was the final discussant to respond to the report. Martin echoed the sentiments of Matt and Professor Nielsen in cautioning against downplaying the issue of disinformation and the effect it can have on society. Martin noted that disinformation can occur in various forms, including users sharing information without knowing or caring whether it is true. Moreover, Martin emphasised how disinformation more broadly can erode trust in reputable news agencies and public institutions by generating, as Professor Nielsen also observed, a general crisis of truth.

Martin additionally mentioned how the torrent of news users consume on Facebook represents a divorce between the source of the information (for example, the BBC) and the way in which it is distributed and reaches the consumer. Martin explained how this works to undermine the trustworthiness of certain kinds of media and information.

In speaking of mechanisms through which to counter disinformation, Martin noted the BBC’s code of accurate, fair and impartial journalism, which underpins its position as one of the most trusted news sources globally. He further noted how the BBC has put in place various accountability mechanisms to handle complaints effectively. Martin additionally spoke of the need for media literacy and for involving younger generations in news-making, reporting, and spotting “fake news”.

Responding to the claims made earlier by Dr Xenos, Martin countered that specialist journalists are most often best placed to fact-check news stories. Lastly, Martin also pointed out that chat apps are complicit in the dissemination of “fake news” items, and that these platforms are much harder to regulate and monitor in terms of the content they handle.

Following Martin’s contribution, the discussion opened up and various questions were posed to the panel regarding the scope and definition of disinformation and how this issue intersects with the fundamental principle of freedom of expression. There was a broad consensus to steer away from unnecessary government regulation that may impact upon free speech. Other issues raised included the tension between the call for open data and data sharing and the coming into effect of the GDPR this month.

Dr Rachel Adams, ILPC Early Career Researcher

Money, law and courage: the varied roles of the UK Information Commissioner

Wed, 18/04/2018 - 11:49

This post is re-posted from the ICO’s website with kind permission. Original web entry available here

Original script may differ from delivered version

Elizabeth Denham delivered the CRISP (Centre for Research into Information, Surveillance and Privacy) annual lecture 2018 at the University of Edinburgh on 14 March.

She spoke about the many roles she must play as UK Information Commissioner and set out the challenges and opportunities ahead.

Introduction

Many thanks for the invitation to speak today. I have a connection with CRISP and William Webster, Kirstie Ball and Charles Raab that predates my time in the UK – the British Columbia OIPC is a partner in the Big Data Surveillance Project with David Lyon and others in Canada. I have long been aware of the importance of the CRISP doctoral training school.

One of the wonderful aspects of privacy and data protection is the extremely rich and interdisciplinary scholarly research community.

Data Protection raises questions of law, politics, sociology, computer science, communications studies, business and management, psychology, urban studies, geography, and so many other areas of scholarship.

For all the regulatory and legal challenges, privacy and data protection continue to raise fascinating intellectual issues.

CRISP is a wonderful model of interdisciplinary research and training for young researchers.

I am very glad to have received the support of the broad and vibrant academic community involved in research on privacy and surveillance since taking up this job.

I am also proud to have launched a new program to fund independent research and help consolidate the network of privacy researchers in the UK.

I am hopeful that this will continue for many years.

Money, law, courage

The title – Money, Law and Courage – signifies, of course, that the contemporary data protection authority (DPA), of which the ICO is the largest in the world in terms of personnel and budget, cannot do its work without a clear legislative framework, the necessary technical and financial resources, and the courage to do our jobs well. My office is responsible for the effective enforcement of no fewer than eleven statutes and regulations.

They say: “Money makes the world go around . . .”

Well, we have a budget of £24 million; following the introduction of the new funding model, this will rise to £34m in 2018/19. We’ve been busy over the last year recruiting more staff and currently have a headcount of around 500.

We expect staffing numbers to continue to increase, passing 600 by 2019 and reaching approximately 650 FTE during 2019/20.

We will be assessing demand as the GDPR goes live and beyond, adjusting our plans accordingly. To give you a sense of how we are fixed now: we’ve got around 200 caseworkers working on issues raised by the public, a 60-strong enforcement department taking forward our investigations, and a similar number charged with developing our information rights policies and engaging with the stakeholders and organisations that need to implement them.

Coming from an office of under 40 in British Columbia, I find the scale of the management task far more complex and challenging.

Expectations

I had – I have – Great Expectations for this job.

But there’s one aspect of the job that I did not expect, and it stems from the very governance structure of the ICO.

My job combines the role of Commissioner, which has a variety of regulatory and quasi-judicial functions, with that of a CEO. It is based on the “Corporation Sole” Model.

That’s highly unusual for a large regulatory body like the ICO.

The implications of this model are that I perform a wide range of management functions in my capacity as the ICO’s CEO.

I would say that as well as my regulatory role, I must also work alongside my excellent staff on administrative duties involving organisation, finance, human resources, and negotiations with the unions.

Much activity of late has been about recruitment and retention issues. I am pleased to report that the Treasury has given me pay flexibility to address the gap in wages when compared to the external market.

Everyone is looking for data protection expertise.

I am also looking at new ways to bring in talent – through secondments from the private and government sectors, and through technology fellowships for post-doctoral experts. 

Toolbox

It’s not just about the money; it’s also about the resources. And I have many tools in the toolbox. Twenty years ago, the toolbox was not global.

Now there is a common recognition that all DPAs need to make creative use of all the tools in the toolbox.

And as in a toolbox, each tool (the hammer, the drill, the screwdriver, the chisel) is suited to a different purpose.

But most of these tools can be used separately, not in conjunction with one another. Throw away the screwdriver or the drill, and the hammer still remains, still capable of driving in the nail.

At the same time, it cannot drill the hole, or screw in the screw – assuming, of course, that the user can tell the difference.

For the person with the hammer, everything can tend to look like a nail, right?

The tools in the privacy toolbox, however, are designed to be used in conjunction with one another. They do form an integrated package, all of which are now necessary and none sufficient on their own.

Of course the tools are all for nothing if the Commissioner and her team don’t have a good plan for what we are building and why.

The Law

So now to the law. This global repertoire of instruments is reflected in the General Data Protection Regulation (GDPR), which will apply in the UK from May: privacy by default and design; codes of practice; privacy seals; Data Protection Impact Assessments (DPIAs); data protection officers; and accountability mechanisms for good privacy management.

The Europeans have made vigorous efforts to learn from abroad and to embrace policy instruments that were pioneered in other countries, such as Canada, and to incorporate them into the GDPR.

Positive results in data protection are not just attributable to decisions from the top.

They are “co-produced” by a widespread network of actors (regulators, businesses, consumer organisations, media, researchers, and individuals).

I see the ICO as the facilitator of this network, a convener as much as the regulator.

My varied roles

Over ten years ago, Charles Raab and Colin Bennett published The Governance of Privacy: Policy Instruments in Global Perspective.1

In that book, they defined the contemporary roles of the DPA as: ombudspersons, auditors, consultants, educators, policy advisers, negotiators, enforcers, and international ambassadors.

Different authorities played these roles in different ways and with shifting emphasis over time. I, and my staff, also play these roles.

Data Protection “Ombudsman”

Any DPA has to be attentive to its main clients – the citizenry who may have concerns and questions about how their personal data is captured and processed.

We all play the classic role of the “ombudsperson.”

Demand for this role is high and increasing. In 2016-17, the ICO received and dealt with over 18,000 data protection complaints, 90% of which were resolved within three months of receipt.

This year we will be over the 21,000 mark and next year we expect over 24,000 complaints as people become more aware of their rights.

Prominent concerns include complaints about timely and comprehensive access to personal information, about the use of CCTV, and about take-down requests made to search engines. We are dealing with a wide range of complaints: most relate to general business, including the financial and insurance sectors, but they also cover the important relationship and services between the state and the citizen, including local and central government, health, policing and education.

Auditor

The auditing role is central, and will become more so under GDPR. That embraces more proactive assessments of organisational accountability and expands our work to the private sector in a way not seen before. But we now also have a more nuanced understanding of what a data protection audit actually entails, and make important distinctions between full-blown audits, risk reviews and advisory audits.

In 2017-18, we delivered 24 full audits providing advice and recommendations, 37 information risk reviews, 18 follow-up audits, and 47 advisory audits to SMEs.

Consultant

We are also consultants and often give advice to organisations that come to us with requests to comment on new products and services.

We are happy to hear of new developments and to give advice about whether new systems are compliant with the law, and about how to minimize risks to privacy. This role too will increase under the GDPR – organisations will be increasingly pressured to get the advice of regulators before systems are developed and services are launched.

They will be expected to implement privacy by design, and by default, and will need advice about how to accomplish those goals.

In this regard, my office is establishing a regulatory “sandbox” that provides beta testing of new initiatives in the private and public sectors.

This strategy allows us to keep up with new technological developments, and at the same time ensure that appropriate protections and safeguards are built-in.

This is what the law requires.

The strategy is based on the strong belief that privacy and innovation are not mutually exclusive. New technology is both a risk and an opportunity. The strategy also allows us to boost the technical expertise of our staff.

Educator

I spend a lot of my time in education – both of the general public and of organisations. We have launched a guide to the GDPR, which has had over 3 million hits since publication.

I have given several dozen speeches to organisations over the last two years, and use those as an opportunity to spread the word to key audiences. We are also active in social media, and broadcast podcasts on significant questions. I also write blogs on key issues – including a series of GDPR myth busting blogs.

In April, we will launch a public education campaign, Your Data Matters, to educate the public on their new rights under the law.

The campaign is the ICO’s but we are collaborating with private sector and civil society partners to assist us in disseminating information about the use of personal data in everyday life, complete with real-life scenarios and story-telling content. The aim is to increase the public’s trust and confidence in how organisations use their data. And that’s my priority.

Policy Adviser 

With the GDPR, and Brexit, I have spent a lot of time with parliamentary committees and ministerial staff, giving policy advice about legislative and regulatory change. I have spent around 2-3 days a week in London since I took up the position because of heavy parliamentary and Whitehall business. We have opened a London office and formed a parliamentary team.

Negotiator

My staff and I need strong negotiation skills: staking out principled positions, but being prepared to compromise. We negotiate with government agencies, and with corporations. We negotiate, for instance, over codes of practice, such as the one currently being developed on direct marketing. The role of negotiator is critical in an area of law where there are often no clear black-and-white answers, and few “bright-line” rules.

We are also involved in negotiation with other regulators and oversight agencies. There are many other players in this space – from the NCSC in matters of cyber security and the Surveillance Camera Commissioner to the Children’s Commissioners. In fact, I met with Bruce Adamson, the Children and Young People’s Commissioner for Scotland, just this morning.

We work hard to develop a framework that allows us to work in a co-ordinated manner in the best interest of UK citizens.

I played all those roles in Canada (in Ottawa and in British Columbia). But they are now played out on a bigger stage, and with far greater implications.

There are two roles I’ve yet to speak about – the enforcer and the international ambassador.

These are far more prominent in my role as UK Information Commissioner than they ever were in Canada. And these are the ones that I would like to discuss in greater detail in the rest of this talk.

Enforcement

My office possesses a greater range of enforcement and sanctioning powers than those in Canada.

As an illustration, companies could find themselves subject to severe penalties for not complying with the GDPR, which sets the maximum fine at £17m or 4% of the organisation’s total annual worldwide turnover in the preceding year, whichever is higher.
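
To make that “whichever is higher” arithmetic concrete, here is a minimal illustrative sketch in Python. The £17m figure and the 4% rate come from the sentence above; the function name and the example turnovers are hypothetical, and this is not an official ICO penalty calculator:

```python
def max_gdpr_penalty(annual_worldwide_turnover_gbp: float) -> float:
    """Return the higher of £17m or 4% of the preceding year's
    total annual worldwide turnover (illustrative only)."""
    FIXED_CAP_GBP = 17_000_000.0  # the £17m figure quoted above
    return max(FIXED_CAP_GBP, 0.04 * annual_worldwide_turnover_gbp)

# Hypothetical turnovers, purely for illustration:
print(max_gdpr_penalty(1_000_000_000))  # 40000000.0 -> 4% of £1bn exceeds £17m
print(max_gdpr_penalty(100_000_000))    # 17000000.0 -> 4% is only £4m, so £17m applies
```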

We also have powers to suspend or amend processing or transfers.

The enforcement notice can be more intrusive than the fine. These are significant fining and directing powers, and they have to be used predictably, consistently and judiciously.

To that end, my office is developing a Regulatory Action Policy to provide greater clarity and focus to our roles.

So, when I look at the contemporary inventory of regulatory tools at my disposal, it is now a long list that operates on a sliding continuum, or hierarchy of regulatory action.

That’s quite a list, right?

We aspire to select the most appropriate regulatory instrument based on a risk assessment of the nature and seriousness of the breach, the sensitivity of the data, the number of individuals affected, the novelty and endurance of the concerns, the larger public interest, and whether other regulatory authorities are already taking action in respect of the matter.

We also reserve the right to take into account the attitude and conduct of the organisation, whether relevant advice has been heeded, and whether accountability measures have been taken to mitigate risk.

Now might be a good time to tell you about our ongoing investigation into the use of personal data by political parties and campaigns. The use of data analytics for political purposes has not been examined by any other DPA.

It is a complex investigation involving over 30 organisations including political parties, data analytics companies, and social media platforms.

We hope to shed light on the mysteries and complexities of the data-driven campaign and election. And we hope that our work will be an important contribution to the wider legal and ethical discussions about the use of personal data to mobilize voters.

International

All privacy and data protection commissioners are increasingly international ambassadors for their domestic data protection regimes and approaches.

We advance the interests of our citizens, and also to some extent our businesses, in a variety of regional and international forums.

As UK Information Commissioner, I am now of course on a far more visible international stage than I ever was in Canada.

To help navigate these uncertain international waters, my office has published an international strategy that recognizes the importance of agility in an ever changing world.

As you know, the GDPR will apply in the UK as of May 25th 2018. We have been giving guidance to British businesses on how to comply with the GDPR, on issues such as automated decision-making, profiling, personal data breach notification, and the processing of data on children.

We have also tried to explode some of the unfortunate myths concerning compliance.

As we have a more longstanding experience with some of the instruments in the GDPR, we hope that our practical guidance can have an influence beyond the UK.

At the same time, we have been trying to influence the new Data Protection Bill, which had its Second Reading debate in the Commons last week, and which aims to align UK law with the GDPR.

Overall, I am encouraged that the interests of the government, UK industry and civil society are broadly aligned around the need to apply the provisions of the GDPR within the UK with minimum divergence. The government has prioritised the issue of data protection and data flows in the Brexit negotiations because data underpins the digital economy, trade and criminal justice.

I am striving for what I have called a “holy trinity of outcomes”: uninterrupted data flows to Europe and the rest of the world; high standards of data protection for UK citizens and consumers, wherever their data resides; and legal certainty for business.

Brexit

We intend to play a full role in EU institutions until the UK leaves the EU. At the same time, we are preparing for the post-Brexit environment in order to ensure that the information rights of UK citizens are not adversely affected.

But several questions remain, which will inescapably be determined by the final contours of the relationship between the UK and the European trading bloc. There is agreement that there will be a transition period – necessary to untangle a 40-year regulatory regime. During the transition period, to avoid a cliff edge harmful to business and citizens, the intent is that the regulatory regimes – from data protection to aviation, food standards and the environment – will be maintained.

When it comes to the arrangements post-Brexit for international transfers, achieving a bespoke agreement on data flows in the commercial and security sectors, or an adequacy finding from the European Commission may be the most elegant ways of ensuring the continued frictionless flow of data between the EU and the UK.

And there is no doubt that having domestic laws that achieve a high standard of data protection, harmonized with those of the EU, will be a significant advantage in a special arrangement.

Should the UK leave the EU without a data deal in place, EU organisations will need to have binding contractual arrangements in place every time they wish to share new information and data with their UK partners.

Even with the GDPR translated into UK law, interpretation of the law is the responsibility of the ICO, and the UK courts.

Our interpretation might be influenced by decisions made through consistency mechanisms within the GDPR and the European Data Protection Board, but there is no guarantee – leading to possible divergences of interpretation and confusion for companies that do business in the UK and the EU.

Perhaps the most significant “unknown” from my point of view is the exact nature of relationship with our DPA colleagues across Europe.

Is the ICO going to have a seat on the European Data Protection Board with voting rights or will we be an observer without voting rights; or not even allowed to have a seat around the table? Is the UK going to be a partner, helping to set policy, or will we have the status of a third country – like Canada or Japan?

And then there is the “onward transfer” problem of how to protect the data of EU citizens exported from UK organisations to other areas of the world, which will be a critical issue in the determination of adequacy. Will the UK have a mirror agreement, similar to that currently enjoyed by Switzerland? Or will UK businesses have to default to various accountability mechanisms, such as binding corporate rules?

And what, then, of data flows from the UK to the United States? Will there be a separate UK-US Privacy Shield arrangement?

There is uncertainty over the legal arrangements in the transition period and the repercussions of this unprecedented process, but the one certainty is that the European Union will continue to advance the highest standards of protection for the personal data of people in the EU, and the UK shares and has committed to maintain these high standards.

I expect that when it comes to rights such as the right to privacy and data protection, the EU and the UK will continue to pursue common strategies; and I expect to maintain substantial dialogue and work with my EU colleagues. The ICO is the largest DPA in Europe and contributes heavily to the work of the Article 29 Working Party. Its influence should, and will, continue to be felt post-Brexit.

Courage

But none of those resources, legal tools and relationships are sufficient, unless the Commissioner has the courage of leadership and inspires teamwork to advance the rights of UK citizens in the face of some strong global, technological and organisational pressures. But courage is not just manifested in enforcement – in using the legal powers of the office to punish and sanction.

It is also a matter of hard work, commitment, perseverance and a skill in knowing what instrument to use, at what time.

Any data protection or privacy Commissioner has to be pragmatic, and be aware of the various policy tools and instruments at his or her disposal. At a superficial level, the job does involve knowing when to use the ‘carrot’ or ‘the stick’. But those choices are now more complex.

So that simple distinction may be misleading – there are now many types of ‘carrot’ and many types of ‘stick’.

At the end of the day, all privacy and data protection commissioners are looking for an ounce of prevention.

That has been generally argued by observers of the work of privacy commissioners, going back to David Flaherty’s pioneering 1989 book, Protecting Privacy in Surveillance Societies.2

Offices like mine are more effective when they can act proactively and give general policy guidance to minimize the need for complaints and enforcement actions.

Prevention is better than cure.

But this is a goal that is not easy to realise, when the office is continually expected to respond to the unexpected: the data breach, the high-profile media story, the sudden policy initiative from government, the significant court decision and so on.

We do try to operate an intelligence function that gathers data on the implementation of data protection, surveys companies and monitors practices.

We have a new team that focuses on priority files, and these cases, investigations or audits are run by cross office groups directed by the senior leadership team. We are then able to understand any general patterns and take proactive measures accordingly.

We also work with civil society and consumer groups – and take their complaints about systemic issues.

GDPR will give us more tools for education, for encouraging accountability, for building in privacy by design and by default. Of course, it is essential to keep the legal sanctions in the background, be ready to use them, and make organisations aware that we are ready to use them.

That general conclusion about the importance of the proactive and general policy work, over the more reactive enforcement work, was also true of my work in Canada and BC.

It is just that I now have more money, more staff, more laws, more tools in my toolbox, a larger audience, a brighter media spotlight and a more extensive range of organisations to regulate.

So, I have the resources to do the job and the law to back me up.

I’ll let you be the judge as to whether I and my team have the courage!

References

1 C.J. Bennett and C.D. Raab, The Governance of Privacy: Policy Instruments in Global Perspective (Cambridge, MA: MIT Press, 2006).

2 D.H. Flaherty, Protecting Privacy in Surveillance Societies: The Federal Republic of Germany, Sweden, France, Canada and the United States (Chapel Hill: University of North Carolina Press, 1989).

Event: Joining The Circle: capturing the zeitgeist of ‘Big Tech’ companies, social media speech and privacy

Mon, 09/04/2018 - 11:40

Professor Robin Barnes (Global Institute of Freedom and Awareness) and Peter Coe (Aston University) have organised a panel session at the Inner Temple, London, on Wednesday 23 May 2018. The session is entitled: ‘Joining The Circle: capturing the zeitgeist of ‘Big Tech’ companies, social media speech and privacy’.

It is based on Dave Eggers’ book, The Circle, which tells the story of an all-powerful new media company that seeks to totally monopolize its marketplace and remake the world in its image. Although fictional, the book captures the zeitgeist surrounding ‘Big Tech’ companies exerting ever-increasing influence over our lives by altering our perceptions and expectations of the media (including citizen journalists), free speech and privacy, and how our personal information is used and protected.

The panel consists of seven experts from academia and practice who are currently engaging with these issues: Peter Coe; Professor Barnes; Dr Paul Wragg and Rebecca Moosavian (both University of Leeds); Dr Paul Bernal (University of East Anglia); Dr Laura Scaife (BBC); and Jacob Rowbottom (University of Oxford). Professor Ian Cram (University of Leeds) will open the conference. These experts will present and discuss their thoughts on these issues, and their potential implications, both now and in the future.

This event will be of interest to practising and academic lawyers with expertise in Media Law generally (including free speech, privacy and data protection), journalists and other media professionals, and those engaged in research and teaching relating to journalism and the wider media.

The event is free to attend, and includes lunch and refreshments. Registration will begin at 9.00am and the panel will finish at around 4.00pm. Following the close of the panel session there will be an opportunity to network with the panel members and other delegates.

Delegate places are limited and available on a first-come, first-served basis. If you would like to attend, please email Peter Coe as soon as possible: p.coe@aston.ac.uk.

New approach to media cases at the Royal Courts of Justice is a welcome development

Mon, 09/04/2018 - 11:24

Guest post by Dr Judith Townend.

This is an edited version of an article which first appeared in Communications Law journal, volume 23, issue 1 (Bloomsbury Professional) and PA Media Lawyer.

In 2012 Mr Justice Tugendhat, ahead of his retirement in 2014, made a plea for more media specialist barristers and solicitors to consider a judicial role: “As the recruiting posters put it: Your country needs you.”

He emphasised the particular burden of freedom of expression cases, which require judges, for example, to consider the rights of third parties, “even if those third parties choose not to attend court” and to provide reasons for the granting of injunctions at very short notice.

Without expert knowledge of the applicable law, this is no easy task. Fortunately, media law cases have not fallen apart with the respective retirements of Sir Michael Tugendhat and Sir David Eady, and recent specialists to join the High Court include Mr Justice Warby in 2014, and Mr Justice Nicklin in 2017 – both formerly of 5RB chambers.

The arrival of Mr Justice Warby, who was given the newly created role of Judge in charge of the Media and Communications List, has provided a welcome opportunity to propose changes to the procedure of media litigation in the Queen’s Bench Division, where the majority of English defamation and privacy claims are heard.

Since taking on responsibility for cases involving one or more of the main media torts – including defamation, misuse of private information and breach of duty under the Data Protection Act 1998 – Mr Justice Warby has spoken about his hopes and plans for the list, and has also conducted a consultation among those who litigate in the area, as well as other interested parties.

The consultation considered the adequacy of Civil Procedure Rules and Practice Directions; the adequacy of the regime for monitoring statistics on privacy injunctions; and support for the creation of a new committee.

As a socio-legal researcher rather than legal practitioner, my interest was piqued by the latter two questions.

For some time, I have been concerned that efforts by the Judiciary and the Ministry of Justice to collect and publish anonymised privacy injunction data have been insufficient, and also that the availability of information about media cases could be improved more generally.

My own efforts to access case files and records in 2011-13, to update research conducted by Eric Barendt and others in the mid-1990s and to interrogate assertions of defamation’s “chilling effect”, proved largely unsuccessful, and I was astonished at how rudimentary and paper-based internal systems at the Royal Courts of Justice appeared to be.

Although public observers are entitled to access certain documents – such as claim forms – the cost and difficulty of locating claim numbers prohibit any kind of useful bulk research that would allow more sophisticated qualitative and quantitative analysis of media litigation.

I jumped, therefore, at the opportunity of the consultation to raise my concerns about the injunctions data, and to support the creation of a new user group committee.

My submission, with Paul Magrath and Julie Doughty, on behalf of the Transparency Project charity, made suggestions for revising the injunctions data collection process, including the introduction of an audit procedure to check information was being recorded systematically and accurately.

Following the consultation, Mr Justice Warby held a large meeting at the Royal Courts of Justice for all respondents and other interested parties at which he shared a table of proposals from the consultation, provisionally ranked as “most feasible”, “more difficult” and “most difficult”.

The last category also included proposals which would require primary legislation, which would be a matter for Parliament rather than the Judiciary.

I was pleased that our initial proposals on the transparency of injunctions data have been deemed practical and feasible in the first instance.

Also considered achievable are some of the proposals related to case management and listings, updating the pre-action protocol (PAP), the Queen’s Bench Guide, and civil practice directions in light of developments in privacy, data protection and defamation litigation and press regulation (not least to reflect the Defamation Act 2013).

This meeting also established the creation of a new Media and Communications List User Group (MACLUG) to which a range of representatives have been appointed.

The group comprises members of the Bar and private practice solicitors (including both claimant and defendant specialists), in-house counsel, clerks, and a costs practitioner.
Additionally, I have joined as a representative of public interest groups – i.e. those engaged in academic research and third sector work. The new committee met for the first time at the end of 2017, and members have formed smaller working groups to take forward the “feasible” proposals, which will be discussed with our respective constituencies in due course, and where relevant, eventually proposed to the Civil Procedure Rule Committee to consider.

In a speech to the Annual Conference of the Media Law Resource Center in September last year, Mr Justice Warby identified his overall aims for the “big picture” and landscape of media litigation: to resolve disputes fairly, promptly, and at reasonable cost.

All of which were “easier said than done”, in his words. Quite so. But it is right that it should be attempted, and with judicial input where appropriate.

Mr Justice Warby’s efforts to date are to be applauded, and in particular, his open approach in addressing some of the flaws and inconsistencies of current practice, and evaluating structural and systemic issues.

That said, a committee formed by the judiciary is constrained in its remit, quite rightly. The consideration of changes to primary legislation should fall to Parliament.

It is therefore important that media law practitioners and other stakeholders should also work with the Ministry of Justice and HM Courts and Tribunals Service to inform ongoing work on courts modernisation, and push for wider consultation and involvement in reforms. A further challenge is to persuade government and parliamentarians to take on any issues requiring changes to legislation.

Part I of the Leveson Inquiry, addressing in part the relationship between media proprietors, editors and politicians, showed that the process of consultation on public policy affecting the news media has been subject to undue influence by certain private interests, and has been insufficiently transparent.

To this end, perhaps the new Lord Chancellor and Secretary of State for Justice, David Gauke MP, and the new Secretary of State for Digital, Culture, Media and Sport, Matt Hancock MP, might consider ways in which they can consult more openly and fairly in their development of policy and draft legislation on freedom of expression, reputation and privacy.

Dr Judith Townend is lecturer in media and information law at the University of Sussex and a member of the Queen’s Bench Division Media and Communications List User Group Committee.

Featured image: courtesy of Dave Pearce (@davebass5) on Flickr.

LSE Experts on T3: Omar Al-Ghazzi

Mon, 09/04/2018 - 10:47

This post is re-posted from the LSE Media Policy Project Blog.

As part of a series of interviews with LSE Faculty on themes related to the Truth, Trust and Technology Commission, Dr Omar Al-Ghazzi talks to LSE MSc student Ariel Riera about ‘echo chambers’ in the context of North Africa and the Middle East.

AR: The spread of misinformation through social media is a main focus of the Commission. Are there similar processes in the Middle East and in the North Africa region?

OA: Questions about trust, divisions within society, and the authoritarian use of information – or what could be called propaganda – are very prevalent in the Middle East and North Africa. So, in a way, a lot of the issues at hand are not really new if we think about communication processes globally. Much of the attention that misinformation has been getting is in relation to Trump and Brexit. But Syria, for instance, is actually a very productive context to think through these questions, because with the uprising and the war there was basically an information blackout where no independent journalist could go into the country. This created an environment where witnesses, citizen journalists and activists filled that gap. So it is now a cliché to say that the war in Syria is actually the most documented war. But all that information has not led to a narrative that people understand in relation to what’s happening. And that has to do with trust in digital media and the kind of narratives that the government disseminates. The echo chamber effect, in the way people access information from online sources they agree with, is as prevalent in the Middle East as it is globally.

AR: And in these countries, who are the perpetrators of fake news and misinformation and what are the channels?

OA: It is a complicated question, because if we talk about the war in Syria the communication environment is much more complex than the binary division between fake and real. For instance, I am interested in the reporting on the ground in areas that are seeing or witnessing war and conflict. I will give you an example. Now in the suburbs of Damascus, where there is a battle between rebels and the government, there are several cases of children and teenagers doing the reporting. So how should this be picked up by news organisations, and what are the consequences? CNN recently called one of the teenagers based in Eastern Ghouta, Muhammed Najem, a ‘combat reporter’. What are the ethical considerations of that? Does that encourage the teenager to take, for instance, more risks to get that footage? How can what he produces be objective if, first, he obviously has no journalism training as a very young person and, second, he is in a very violent context where his obvious interest lies in his own survival and in drawing attention to his and his community’s suffering? He has a voice that he wants to be heard, and which should be heard. But why is the expectation, if he is dubbed a ‘combat reporter’, that what he produces should be objective news reporting?

Beyond this example of the complex picture in war reporting, I think the Middle East region also teaches us that when there is a lack of trust in institutions of any country in the world, when there is division in society about a national sense of belonging, about what it means to be a patriot or a traitor, that would produce mistrust in the media. Basically, a fractured political environment engenders lack of trust in media, and engenders that debate around fake or real. So there is a layer beyond the fakeness and realness that’s really about social cohesion and political identity.

AR: Nationalist politicians all over the world have found in social media a way to bypass mainstream media and appeal directly to voters. What techniques do they use to do this?

OA: Perhaps in the Middle East you don’t find an example of a stream of consciousness relayed live on Twitter, as is the case with President Trump, but, like elsewhere in the world, politicians are on Twitter and even foreign policy is often communicated there. Also, a lot of narratives that feed into conflicts, like the Arab-Israeli conflict, take shape on social media. So without looking at social media you certainly don’t get the full picture, even of the geopolitics in the region. Without social media, one would not grasp how government positions get internalised by people and how people contribute – whether by feeding into government policies, or by resisting them.

AR: Based on your observations in North Africa and the Middle East, can mistrust or even distrust of mainstream media outlets be a healthy instinct? For example, if mainstream media is a place where only one voice is heard.

OA: Even though a lot of the media are politicised in the Arab world because they are government-owned, people have access to media other than their own governments’ because of a common regional cultural affiliation, a shared language and the nature of the regional media environment. So actually people in the Arab world are sophisticated media users, because they have access to a wide array of media outlets. Of course, there are outlets that are controlled by governments wherever one may be situated, and things vary between countries, but audiences can access pan-Arab news media such as Al Jazeera, Al Arabiya and Al Mayadeen. They have access to a wide array of online news platforms as well as broadcast news. So you really have a lot of choices. If you are a very informed audience member, you might watch one news outlet to know, let’s say, what the Iranian position on a certain event is, and then watch a Saudi-funded channel to see the Saudi position. But of course most people don’t do that, because they just access the media that offers the perspective they already agree with.

We have to remember that in the context of the Middle East there are a lot of different conflicts; there is war, which obviously heightens people’s emotions, their allegiances and whatever their worldview is. So we are also talking about a context in which, because of what is happening on the ground, people feel strongly about their political positioning, which feeds into the echo chamber effect.

AR: You wrote that, at least linked to the Arab Spring, there was a ‘diversity of acts referred to as citizen journalism’. What differentiates these practices from the journalism within established media?

OA: Basically, in relation to the 2011 Arab uprisings, there were a lot of academic and journalistic approaches that talked about how these uprisings were Facebook or Twitter revolutions, or that theorised digital media practices only through the lens of citizen journalism. But I argued that we cannot privilege one lens to look at what digital media does on the political level, because a lot of people use digital media – from terrorist organisations to activists on the ground to government agents. So one cannot privilege a particular use of digital media, focus on that, and make claims about digital media generally, when actually the picture is much more complicated and needs to be sorted out more.

Of course the proliferation of smartphones and social media offered ordinary people the opportunity to have their own output, to produce witness videos or write opinions. It is a very different media ecology because of that. However, we cannot take for granted how social media is used by different actors. In social science we have to think about issues of class, literacy, the urban rural divide, the political system, the media system. And, within that complexity, locate particular practices of social media rather than make blanket statements about social media doing something to politics generally and universally.

Dr Omar Al-Ghazzi is Assistant Professor in the Department of Media and Communications at LSE. He completed his PhD at the Annenberg School for Communication, the University of Pennsylvania, and holds MAs in Communication from the University of Pennsylvania and American University and a BA in Communication Arts from the Lebanese American University.

Recent developments on freedom of expression, Dr David Goldberg

Fri, 16/03/2018 - 10:23

This post brings us some recent developments on freedom of expression from Dr David Goldberg, Senior Visiting Fellow at the Institute of Computer and Communications Law in the Centre for Commercial Law Studies, Queen Mary, University of London, and a member of the Information Law and Policy Centre’s Advisory Board.

Dr Goldberg has recently co-organised a symposium at the Southwestern Law School, Los Angeles, on “Fake News and Weaponized Defamation”. The event took place on the 26th January 2018. Further information on the event can be found at: https://www.swlaw.edu/curriculum/honors-programs/law-review-journals/journal-international-media-entertainment-law/global. Photos from the event are available at https://flic.kr/s/aHsmfxk8dL.

Dr Goldberg delivered a presentation at the event calling for enhancing media literacy, and cautioning against over-relying on the law to deal with the so-called phenomenon of fake news. Dr Goldberg’s presentation will be available in a forthcoming publication.

In addition, Dr Goldberg has recently published a chapter entitled ‘Dronalism, Newsgathering Protection and Day-to-day Norms’ in Responsible Drone Journalism (2018) edited by Astrid Gynnild and Turo Uskali. The book is available at https://www.crcpress.com/Responsible-Drone-Journalism/Gynnild-Uskali/p/book/9781138059351.

Lastly, following up on the ‘Freedom of Information at 250’ event held at the Free Word Centre in December 2016 with the support of the Information Law and Policy Centre at the Institute of Advanced Legal Studies, and the Embassies of Sweden and Finland, the publication Press Freedom 250 Years: Freedom of the Press and Public Access to Official Documents in Sweden and Finland – A Living Heritage from 1766 is now available in English. The publication of this translation has been in large part due to the efforts of Dr David Goldberg, Mark Weiler and Staffan Dalhoff. The book was launched on 2nd December 2016 at the Swedish Parliament, and the free PDF is available at http://www.riksdagen.se/globalassets/15.-bestall-och-ladda-ned/andra-sprak/tf-250-ar-eng-2018.pdf.

To order the book for libraries, contact:
Riksdag Printing Office, SE 100 12 Stockholm
E-mail: ordermottagningen@riksdagen.se

ILPC Annual Conference and Annual Lecture 2017 Children and Digital Rights: Regulating Freedoms and Safeguards

Tue, 13/03/2018 - 17:07

ILPC Annual Conference and Annual Lecture 2017
Children and Digital Rights: Regulating Freedoms and Safeguards

The Internet provides children with more freedom to communicate, learn, create, share, and engage with society than ever before. Research by Ofcom in 2016 found that 72 percent of young teenagers in the UK have social media accounts. Twenty percent of the same group have made their own digital music and 30 percent have used the Internet for civic engagement by signing online petitions or by sharing and talking about the news.

Interacting within this connected digital world, however, also presents a number of challenges to ensuring the adequate protection of a child’s rights to privacy, freedom of expression, and safety, both online and offline. These risks range from children being unable to identify advertisements on search engines to being subjected to bullying, grooming, or other types of abuse in online chat groups.

Children may also be targeted via social media platforms with methods (such as fake online identities or manipulated photos and images) specially designed to harm them or exploit their particular vulnerabilities and naivety.

These issues were the focus of the 2017 Annual Conference of the Information Law and Policy Centre (ILPC) based at the Institute of Advanced Legal Studies, University of London. The ILPC produces, promotes, and facilitates research about the law and policy of information and data, and the ways in which law both restricts and enables the sharing and dissemination of different types of information.

The ILPC’s Annual Conference was one of a series of events celebrating the 70th anniversary of the founding of the Institute of Advanced Legal Studies. Other events included the ILPC’s Being Human Festival expert and interdisciplinary panel discussion on ‘Co-existing with HAL 9000: Being Human in a World with Artificial Intelligence’.

At the 2017 ILPC Annual Conference, leading policymakers, practitioners, regulators, key representatives from industry and civil society, and academic experts examined and debated the opportunities and challenges posed by current and future legal frameworks and the policies being used and developed to safeguard these freedoms and rights.

These leading stakeholders included Rachel Bishop, Deputy Director of Internet Policy at the Department for Digital, Culture, Media and Sport (DCMS); Lisa Atkinson, Head of Policy at the Information Commissioner’s Office (ICO); Anna Morgan, Deputy Data Protection Commissioner of Ireland; Graham Smith, Internet law expert at Bird & Bird LLP; Renate Samson, former CEO of privacy advocacy organisation Big Brother Watch; and Simon Milner, Facebook’s Policy Director for the UK, Africa, and the Middle East.

The legal systems under scrutiny included the UN Convention on the Rights of the Child and the related provisions of the UK Digital Charter, as well as the UK Data Protection Bill, which will implement the major reforms of the much anticipated EU General Data Protection Regulation (2016/679) (GDPR), applying from 25 May 2018. Key concerns expressed by delegates included the practical effectiveness of, and the lack of evidence-based policy behind, the controversial age of consent for children’s use of online information services provided for under the GDPR.

Further questions were raised as to the impact in practice on children’s privacy, freedom of expression, and civil liberties of the new transparency and accountability principles and mechanisms that must be implemented by industry and governments when their data processing involves the online marketing to, or monitoring of, children.

Given the importance and pertinence of these challenging and cutting-edge policy issues, the Centre is delighted that several papers presented, discussed, and debated at the conference’s plenary sessions and keynote panels – by regulators and academic experts from institutions within the UK, the EU, and beyond – feature in a special issue of the leading peer-reviewed legal journal Communications Law, published by Bloomsbury Professional.

This special issue also includes the Centre’s 2017 Annual Lecture, delivered by one of the country’s leading children’s online rights campaigners, Baroness Beeban Kidron OBE, a member of the House of Lords and film-maker, on ‘Are Children more than Clickbait in the 21st Century?’

For IALS podcasts of the 2017 ILPC Annual Lecture delivered by Baroness Kidron and presentations from the Annual Conference’s Keynote Panel, please see the IALS website at: http://ials.sas.ac.uk/digital/videos.

Nora Ni Loideain
Director and Lecturer in Law,
Information Law and Policy Centre,
IALS, University of London.
