Information Law & Policy Centre Blog
This post is re-posted from the ICO’s website with kind permission. Original web entry available here.
Original script may differ from delivered version.

Elizabeth Denham delivered the CRISP (Centre for Research into Information, Surveillance and Privacy) annual lecture 2018 at the University of Edinburgh on 14 March.
She spoke about the many roles she must play as UK Information Commissioner and set out the challenges and opportunities ahead.

Introduction
Many thanks for the invitation to speak today. I have a connection with CRISP and William Webster, Kirstie Ball and Charles Raab that predates my time in the UK – the British Columbia OIPC is a partner in the Big Data Surveillance Project with David Lyon and others in Canada. I have long been aware of the importance of the CRISP doctoral training school.
One of the wonderful aspects of privacy and data protection is the extremely rich and interdisciplinary scholarly research community.
Data Protection raises questions of law, politics, sociology, computer science, communications studies, business and management, psychology, urban studies, geography, and so many other areas of scholarship.
For all the regulatory and legal challenges, privacy and data protection continue to raise fascinating intellectual issues.
CRISP is a wonderful model of interdisciplinary research and training for young researchers.
I am very glad to have received the support of the broad and vibrant academic community involved in research on privacy and surveillance since taking up this job.
I am also proud to have launched a new program to fund independent research and help consolidate the network of privacy researchers in the UK.
I am hopeful that this will continue for many years.

Money, law, courage
The title, Money, Law and Courage – signifies, of course, that the contemporary data protection authority (DPA), of which the ICO is the largest in the world in terms of personnel and budget, cannot do its work without a clear legislative framework, the necessary technical and financial resources and the courage to do our jobs well. My office is responsible for the effective enforcement of no fewer than eleven statutes and regulations.
They say: “Money makes the world go around . . .”
Well, we have a budget of £24 million; following the introduction of the new funding model this will rise to £34m in 2018/19. We’ve been busy over the last year recruiting more staff and currently have a headcount of around 500.
We expect staffing numbers to continue to increase, passing 600 by 2019 and reaching approximately 650 FTE during 2019/20.
We will be assessing demand as the GDPR goes live and beyond, adjusting our plans accordingly. To give you a sense of how we are fixed now, we’ve got around 200 case-workers working on issues raised by the public, a 60-strong enforcement department taking forward our investigations and a similar number charged with developing our information rights policies and engaging with the stakeholders and organisations that need to implement them.
Coming from an office of under 40 in British Columbia, I find the scale of the management task far more complex and challenging.

Expectations
I had – I have – Great Expectations for this job.
But there’s one aspect of the job that I did not expect, and it stems from the very governance structure of the ICO.
My job combines the role of Commissioner, which has a variety of regulatory and quasi-judicial functions, with that of a CEO. It is based on the “Corporation Sole” Model.
That’s highly unusual for a large regulatory body like the ICO.
The implications of this model are that I perform a wide range of management functions in my capacity as the ICO’s CEO.
I would say that as well as my regulatory role, I must also work alongside my excellent staff on administrative duties involving organisation, finance, human resources, and negotiations with the unions.
Much activity of late has been about recruitment and retention issues. I am pleased to report that the Treasury has given me pay flexibility to address the gap in wages when compared to the external market.
Everyone is looking for data protection expertise.
I am also looking at new ways to bring in talent – through secondments from the private and government sectors, and through technology fellowships for post-doctoral experts.

Toolbox
It’s not just about the money, it’s also about the resources. And I have many tools in the toolbox. 20 years ago, the toolbox was not global.
Now there is a common recognition that all DPAs need to make creative use of all the tools in the toolbox.
And as in a toolbox, each tool (the hammer, the drill, the screwdriver, the chisel) is suited to a different purpose.
But in an ordinary toolbox, most tools are used separately, not in conjunction with one another. Throw away the screwdriver or the drill, and the hammer remains, still capable of driving in the nail.
At the same time, it cannot drill the hole, or screw in the screw – assuming, of course, that the user can tell the difference.
For the person with the hammer, everything can tend to look like a nail, right?
The tools in the privacy toolbox, however, are designed to be used in conjunction with one another. They form an integrated package, each of which is now necessary and none of which is sufficient on its own.
Of course the tools are all for nothing if the Commissioner and her team don’t have a good plan for what we are building and why.

The Law
So now to the law. This global repertoire of instruments is reflected in the General Data Protection Regulation (GDPR), which will apply in the UK from May: privacy by default and design; codes of practice; privacy seals; Data Protection Impact Assessments (DPIAs); data protection officers; and accountability mechanisms for good privacy management.
The Europeans have made vigorous efforts to learn from abroad and to embrace policy instruments that were pioneered in other countries, such as Canada, and to incorporate them into the GDPR.
Positive results in data protection are not just attributable to decisions from the top.
They are “co-produced” by a widespread network of actors (regulators, businesses, consumer organisations, media, researchers, and individuals).
I see the ICO as the facilitator of this network, a convener as much as the regulator.

My varied roles
Over ten years ago, Charles Raab and Colin Bennett published The Governance of Privacy: Policy Instruments in Global Perspective.[1]
In that book, they defined the contemporary roles of the DPA as: ombudspersons, auditors, consultants, educators, policy advisers, negotiators, enforcers, and international ambassadors.
Different authorities played these roles in different ways and with shifting emphasis over time. I, and my staff, also play these roles.

Data Protection “Ombudsman”
Any DPA has to be attentive to its main clients – the citizenry who may have concerns and questions about how their personal data is captured and processed.
We all play the classic role of the “ombudsperson.”
Demand for this role is high and increasing. In 2016-17, the ICO received and dealt with over 18,000 data protection complaints, 90% of which were resolved within three months of receipt.
This year we will be over the 21,000 mark and next year we expect over 24,000 complaints as people become more aware of their rights.
Prominent concerns include complaints about timely and comprehensive access to personal information, about the use of CCTV, and about take-down requests from search engines. We are dealing with a wide range of complaints; most relate to general business, including the financial and insurance sectors, but they also cover the important relationship and services between the state and the citizen, including local and central government, health, policing and education.

Auditor
The auditing role is central, and will become more so under GDPR. That embraces more proactive assessments of organisational accountability and expands our work to the private sector in a way not seen before. But we now also have a more nuanced understanding of what a data protection audit actually entails, and make important distinctions between full-blown audits, risk reviews and advisory audits.
In 2017-18, we delivered 24 full audits providing advice and recommendations, 37 information risk reviews, 18 follow-up audits, and 47 advisory audits to SMEs.

Consultant
We are also consultants and often give advice to organisations that come to us with requests to comment on new products and services.
We are happy to hear of new developments and to give advice about whether new systems are compliant with the law, and about how to minimize risks to privacy. This role too will increase under the GDPR – organisations will be increasingly pressured to get the advice of regulators before systems are developed and services are launched.
They will be expected to implement privacy by design, and by default, and will need advice about how to accomplish those goals.
In this regard, my office is establishing a regulatory “sandbox” that provides beta testing of new initiatives in private and public sectors.
This strategy allows us to keep up with new technological developments, and at the same time ensure that appropriate protections and safeguards are built-in.
This is what the law requires.
The strategy is based on the strong belief that privacy and innovation are not mutually exclusive. New technology is both a risk and an opportunity. The strategy also allows us to boost the technical expertise of our staff.

Educator
I spend a lot of my time in education – both of the general public and of organisations. We have launched a guide to the GDPR, which has had over 3 million hits since publication.
I have given several dozen speeches to organisations over the last two years, and use those as an opportunity to spread the word to key audiences. We are also active in social media, and broadcast podcasts on significant questions. I also write blogs on key issues – including a series of GDPR myth busting blogs.
In April, we will launch a public education campaign, Your Data Matters, to educate the public on their new rights under the law.
The campaign is the ICO’s, but we are collaborating with private sector and civil society partners to help disseminate information about the use of personal data in everyday life, complete with real-life scenarios and story-telling content. The aim is to increase the public’s trust and confidence in how organisations use their data. And that’s my priority.

Policy Adviser
With the GDPR, and Brexit, I have spent a lot of time with parliamentary committees and ministerial staff, giving policy advice about legislative and regulatory change. Because of heavy parliamentary and Whitehall business, I have spent around 2-3 days a week in London since I took up the position. We have opened a London office and formed a parliamentary team.

Negotiator
My staff and I need strong negotiation skills: staking out principled positions, but being prepared to compromise. We negotiate with government agencies, and with corporations. We negotiate, for instance, over codes of practice, such as the one currently being developed on direct marketing. The role of negotiator is critical in an area of law where there are often no clear black and white answers, and few “bright-line” rules.
We are also involved in negotiation with other regulators and oversight agencies. There are many other players in this space – from the NCSC in matters of cyber security and the Surveillance Camera Commissioner to the Children’s Commissioners. In fact, I met with Bruce Adamson, the Children and Young People’s Commissioner for Scotland, just this morning.
We work hard to develop a framework that allows us to work in a co-ordinated manner in the best interest of UK citizens.
I played all those roles in Canada (in Ottawa and in British Columbia). But they are now played out on a bigger stage, and with far greater implications.
There are two roles I’ve yet to speak about – the enforcer and the international ambassador.
These are far more prominent in my role as UK Information Commissioner than they ever were in Canada. And these are the ones that I would like to discuss in greater detail in the rest of this talk.

Enforcement
My office possesses a greater range of enforcement and sanctioning powers than those in Canada.
As an illustration, companies could find themselves subject to severe penalties for not complying with the GDPR, under which the maximum fine is £17m or 4% of the organisation’s total annual worldwide turnover in the preceding year, whichever is higher.
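The “whichever is higher” cap can be expressed as a one-line calculation. This is an illustrative sketch only – the function name, parameter names and default cap are my own, not an ICO formula, and real penalties are set case by case well below the maximum:

```python
def max_gdpr_fine(annual_worldwide_turnover_gbp: float,
                  statutory_cap_gbp: float = 17_000_000) -> float:
    """Upper bound on a GDPR penalty, per the figures cited in the talk:
    the fixed statutory cap (£17m) or 4% of total annual worldwide
    turnover in the preceding year, whichever is higher."""
    return max(statutory_cap_gbp, 0.04 * annual_worldwide_turnover_gbp)
```

So for a company with £100m turnover the £17m cap dominates, while for one with £1bn turnover the 4% figure (£40m) is the binding ceiling.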
We also have powers to suspend or amend processing or transfers.
The enforcement notice can be more intrusive than the fine. These are significant fining and directing powers, and they have to be used predictably, consistently and judiciously.
To that end, my office is developing a Regulatory Action Policy to provide greater clarity and focus to our roles.
So, when I look at the contemporary inventory of regulatory tools at my disposal, it is now a long list that operates on a sliding continuum, or hierarchy of regulatory action.
That’s quite a list, right?
We aspire to select the most appropriate regulatory instrument based on a risk assessment of the nature and seriousness of the breach, the sensitivity of the data, the number of individuals affected, the novelty and endurance of the concerns, the larger public interest, and whether other regulatory authorities are already taking action in respect of the matter.
We also reserve the right to take into account the attitude and conduct of the organisation, whether relevant advice has been heeded, and whether accountability measures have been taken to mitigate risk.
Now might be a good time to tell you about our ongoing investigation into the use of personal data by political parties and campaigns. The use of data analytics for political purposes has not been examined by any other DPA.
It is a complex investigation involving over 30 organisations including political parties, data analytics companies, and social media platforms.
We hope to shed light on the mysteries and complexities of the data-driven campaign and election. And we hope that our work will be an important contribution to the wider legal and ethical discussions about the use of personal data to mobilize voters.

International
All privacy and data protection commissioners are increasingly international ambassadors for their domestic data protection regimes and approaches.
We advance the interests of our citizens, and also to some extent our businesses, in a variety of regional and international forums.
As UK Information Commissioner, I am now of course on a far more visible international stage than I ever was in Canada.
To help navigate these uncertain international waters, my office has published an international strategy that recognizes the importance of agility in an ever changing world.
As you know, the GDPR will apply in the UK as of May 25th 2018. We have been giving guidance to British businesses on how to comply with the GDPR, on issues such as automated decision-making, profiling, personal data breach notification, and the processing of data on children.
We have also tried to explode some of the unfortunate myths concerning compliance.
As we have longer-standing experience with some of the instruments in the GDPR, we hope that our practical guidance can have an influence beyond the UK.
At the same time, we have been trying to influence the new Data Protection Bill, which had its Second Reading debate in the Commons last week, and which aims to align UK law with the GDPR.
Overall, I am encouraged that the interests of the government, UK industry and civil society are broadly aligned around the need to apply the provisions of the GDPR within the UK with minimum divergence. The government has prioritised the issue of data protection and data flows in the Brexit negotiations because data underpins the digital economy, trade and criminal justice.
I am striving for what I have called a “holy trinity of outcomes”: uninterrupted data flows to Europe and the rest of the world; high standards of data protection for UK citizens and consumers, wherever their data resides; and legal certainty for business.

Brexit
We intend to play a full role in EU institutions until the UK leaves the EU, but we are also preparing for the post-Brexit environment in order to ensure that the information rights of UK citizens are not adversely affected.
But several questions remain, which will inescapably be determined by the final contours of the relationship between the UK and the European trading bloc. There is agreement that there will be a transition period – necessary to untangle a 40-year regulatory regime. During the transition period, to avoid a cliff edge harmful to business and citizens, the intent is that the regulatory regimes – from data protection to aviation, food standards and the environment – will be maintained.
When it comes to the arrangements post-Brexit for international transfers, achieving a bespoke agreement on data flows in the commercial and security sectors, or an adequacy finding from the European Commission may be the most elegant ways of ensuring the continued frictionless flow of data between the EU and the UK.
And there is no doubt that having domestic laws that achieve a high standard of data protection, harmonized with those of the EU, will be a significant advantage in a special arrangement.
Should the UK leave the EU without a data deal in place, EU organisations will need to have binding contractual arrangements in place every time they wish to share new information and data with their UK partners.
Even with the GDPR translated into UK law, interpretation of the law is the responsibility of the ICO, and the UK courts.
Our interpretation might be influenced by decisions made through consistency mechanisms within the GDPR and the European Data Protection Board, but there is no guarantee – leading to possible divergences of interpretation and confusion for companies that do business in the UK and the EU.
Perhaps the most significant “unknown” from my point of view is the exact nature of relationship with our DPA colleagues across Europe.
Is the ICO going to have a seat on the European Data Protection Board with voting rights or will we be an observer without voting rights; or not even allowed to have a seat around the table? Is the UK going to be a partner, helping to set policy, or will we have the status of a third country – like Canada or Japan?
And then there is the “onward transfer” problem of how to protect the data of EU citizens exported from UK organisations to other areas of the world, which will be a critical issue in the determination of adequacy. Will the UK have a mirror agreement, similar to that enjoyed currently by Switzerland? Or will UK businesses have to default to various accountability mechanisms, such as binding corporate rules?
And what, then, of data flows from the UK to the United States? Will there be a separate UK-US Privacy Shield arrangement?
There is uncertainty over the legal arrangements in the transition period and the repercussions of this unprecedented process, but the one certainty is that the European Union will continue to advance the highest standards of protection for the personal data of people in the EU, and the UK shares and has committed to maintain these high standards.
I expect that when it comes to rights such as the right to privacy and data protection, the EU and the UK will continue to pursue common strategies; and I expect to maintain substantial dialogue and work with my EU colleagues. The ICO is the largest DPA in Europe and contributes heavily to the work of the Article 29 Working Party. Its influence should, and will, continue to be felt post-Brexit.

Courage
But none of those resources, legal tools and relationships are sufficient, unless the Commissioner has the courage of leadership and inspires teamwork to advance the rights of UK citizens in the face of some strong global, technological and organisational pressures. But courage is not just manifested in enforcement – in using the legal powers of the office to punish and sanction.
It is also a matter of hard work, commitment, perseverance and a skill in knowing what instrument to use, at what time.
Any data protection or privacy Commissioner has to be pragmatic, and be aware of the various policy tools and instruments at his or her disposal. At a superficial level, the job does involve knowing when to use the ‘carrot’ or ‘the stick’. But those choices are now more complex.
So that simple distinction may be misleading – there are now many types of ‘carrot’ and many types of ‘stick’.
At the end of the day, all privacy and data protection commissioners are looking for an ounce of prevention.
That has been generally argued by observers of the work of privacy commissioners, going back to David Flaherty’s pioneering 1989 book, Protecting Privacy in Surveillance Societies.[2]
Offices like the ICO are more effective when they can act proactively, and can give general policy guidance to minimize the need for complaints and enforcement actions.
Prevention is better than cure.
But this is a goal that is not easy to realise, when the office is continually expected to respond to the unexpected: the data breach, the high-profile media story, the sudden policy initiative from government, the significant court decision and so on.
We do try to operate an intelligence function that gathers data on the implementation of data protection, surveys companies and monitors practices.
We have a new team that focuses on priority files, and these cases, investigations or audits are run by cross office groups directed by the senior leadership team. We are then able to understand any general patterns and take proactive measures accordingly.
We also work with civil society and consumer groups – and take their complaints about systemic issues.
GDPR will give us more tools for education, for encouraging accountability, for building in privacy by design and by default. Of course, it is essential to keep the legal sanctions in the background, be ready to use them, and make organisations aware that we are ready to use them.
That general conclusion about the importance of the proactive and general policy work, over the more reactive enforcement work, was also true of my work in Canada and BC.
It is just that I now have more money, more staff, more laws, more tools in my toolbox, a larger audience, a brighter media spotlight and a more extensive range of organisations to regulate.
So, I have the resources to do the job and the law to back me up.
I’ll let you be the judge as to whether I and my team have the courage!

References
[1] C.J. Bennett and C.D. Raab, The Governance of Privacy: Policy Instruments in Global Perspective (Cambridge, MA: MIT Press, 2006).
[2] D.H. Flaherty, Protecting Privacy in Surveillance Societies: The Federal Republic of Germany, Sweden, France, Canada and the United States (Chapel Hill: University of North Carolina Press, 1989).
Event: Joining The Circle: capturing the zeitgeist of ‘Big Tech’ companies, social media speech and privacy
Professor Robin Barnes (Global Institute of Freedom and Awareness) and Peter Coe (Aston University) have organised a panel session at the Inner Temple, London, on Wednesday 23 May 2018. The session is entitled: ‘Joining The Circle: capturing the zeitgeist of ‘Big Tech’ companies, social media speech and privacy’.
It is based on Dave Eggers’s book, The Circle, which tells the story of an all-powerful new media company that seeks to totally monopolize its marketplace and remake the world in its image. Although fictional, the book captures the zeitgeist surrounding ‘Big Tech’ companies exerting ever-increasing influence over our lives by altering our perceptions and expectations of the media (including citizen journalists), free speech and privacy, and how our personal information is used and protected.
The panel consists of seven experts from academia and practice who are currently engaging with these issues: Peter Coe; Professor Barnes; Dr Paul Wragg and Rebecca Moosavian (both University of Leeds); Dr Paul Bernal (University of East Anglia); Dr Laura Scaife (BBC); and Jacob Rowbottom (University of Oxford). Professor Ian Cram (University of Leeds) will open the conference. These experts will present and discuss their thoughts on these issues, and their potential implications, both now and in the future.
This event will be of interest to practising and academic lawyers with expertise in Media Law generally (including free speech, privacy and data protection), journalists and other media professionals, and those engaged in research and teaching relating to journalism and the wider media.
The event is free to attend, and includes lunch and refreshments. Registration will begin at 9.00am and the panel will finish at around 4.00pm. Following the close of the panel session there will be an opportunity to network with the panel members and other delegates.
Delegate places are limited. Therefore, places are available on a first come first served basis. If you would like to attend, please email Peter Coe as soon as possible: firstname.lastname@example.org.
Guest post by Dr Judith Townend.
This is an edited version of an article which first appeared in Communications Law journal, volume 23, issue 1 (Bloomsbury Professional) and PA Media Lawyer.
In 2012 Mr Justice Tugendhat, ahead of his retirement in 2014, made a plea for more media specialist barristers and solicitors to consider a judicial role: “As the recruiting posters put it: Your country needs you.”
He emphasised the particular burden of freedom of expression cases, which require judges, for example, to consider the rights of third parties, “even if those third parties choose not to attend court” and to provide reasons for the granting of injunctions at very short notice.
Without expert knowledge of the applicable law, this is no easy task. Fortunately, media law cases have not fallen apart with the respective retirements of Sir Michael Tugendhat and Sir David Eady, and recent specialists to join the High Court include Mr Justice Warby in 2014, and Mr Justice Nicklin in 2017 – both formerly of 5RB chambers.
The arrival of Mr Justice Warby, who was given the newly created role of Judge in charge of the Media and Communications List, has provided a welcome opportunity to propose changes to the procedure of media litigation in the Queen’s Bench Division, where the majority of English defamation and privacy claims are heard.
Since taking on responsibility for cases involving one or more of the main media torts – including defamation, misuse of private information and breach of duty under the Data Protection Act 1998 – Mr Justice Warby has spoken about his hopes and plans for the list, and has also conducted a consultation among those who litigate in the area, as well as other interested parties.
The consultation considered the adequacy of Civil Procedure Rules and Practice Directions; the adequacy of the regime for monitoring statistics on privacy injunctions; and support for the creation of a new committee.
As a socio-legal researcher rather than legal practitioner, my interest was piqued by the latter two questions.
For some time, I have been concerned that efforts by the Judiciary and the Ministry of Justice to collect and publish anonymised privacy injunction data have been insufficient, and also that the availability of information about media cases could be improved more generally.
My own efforts to access case files and records in 2011-13, to update research conducted by Eric Barendt and others in the mid-1990s, and to interrogate assertions of defamation’s “chilling effect”, proved largely unsuccessful, and I was astonished at how rudimentary and paper-based internal systems at the Royal Courts of Justice appeared to be.
Although public observers are entitled to access certain documents – such as claim forms – the cost and difficulty in locating claim numbers prohibits any kind of useful bulk research which would allow more sophisticated qualitative and quantitative analysis of media litigation.
I jumped, therefore, at the opportunity of the consultation to raise my concerns about the injunctions data, and to support the creation of a new user group committee.
My submission, with Paul Magrath and Julie Doughty, on behalf of the Transparency Project charity, made suggestions for revising the injunctions data collection process, including the introduction of an audit procedure to check information was being recorded systematically and accurately.
Following the consultation, Mr Justice Warby held a large meeting at the Royal Courts of Justice for all respondents and other interested parties at which he shared a table of proposals from the consultation, provisionally ranked as “most feasible”, “more difficult” and “most difficult”.
The last category also included proposals which would require primary legislation, which would be a matter for Parliament rather than the Judiciary.
I was pleased that our initial proposals on the transparency of injunctions data have been deemed practical and feasible in the first instance.
Also considered achievable are some of the proposals related to case management and listings, updating the pre-action protocol (PAP), the Queen’s Bench Guide, and civil practice directions in light of developments in privacy, data protection and defamation litigation and press regulation (not least to reflect the Defamation Act 2013).
This meeting also established the creation of a new Media and Communications List User Group (MACLUG) to which a range of representatives have been appointed.
The group comprises members of the Bar and private practice solicitors (including both claimant and defendant specialists), in-house counsel, clerks, and a costs practitioner.
Additionally, I have joined as a representative of public interest groups – i.e. those engaged in academic research and third sector work. The new committee met for the first time at the end of 2017, and members have formed smaller working groups to take forward the “feasible” proposals, which will be discussed with our respective constituencies in due course, and where relevant, eventually proposed to the Civil Procedure Rule Committee to consider.
In a speech to the Annual Conference of the Media Law Resource Center in September last year Mr Justice Warby identified his overall aims for the “big picture” and landscape of media litigation: to resolve disputes fairly, promptly, and at reasonable cost.
All of which were “easier said than done”, in his words. Quite so. But it is right that it should be attempted, and with judicial input where appropriate.
Mr Justice Warby’s efforts to date are to be applauded, and in particular, his open approach in addressing some of the flaws and inconsistencies of current practice, and evaluating structural and systemic issues.
That said, a committee formed by the judiciary is constrained in its remit, quite rightly. The consideration of changes to primary legislation should fall to Parliament.
It is therefore important that media law practitioners and other stakeholders should also work with the Ministry of Justice and HM Courts and Tribunals Service to inform ongoing work on courts modernisation, and push for wider consultation and involvement in reforms. A further challenge is to persuade government and parliamentarians to take on any issues requiring changes to legislation.
Part I of the Leveson Inquiry addressing, in part, the relationship between media proprietors, editors and politicians showed that the process of consultation on public policy affecting the news media has been subject to undue influence by certain private interests, and insufficiently transparent.
To this end, perhaps the new Lord Chancellor and Secretary of State for Justice, David Gauke MP, and the new Secretary of State for Digital, Culture, Media and Sport, Matt Hancock MP, might consider ways in which they can consult more openly and fairly in their development of policy and draft legislation on freedom of expression, reputation and privacy.
Dr Judith Townend is lecturer in media and information law at the University of Sussex and a member of the Queen’s Bench Division Media and Communications List User Group Committee.
Featured image: courtesy of Dave Pearce (@davebass5) on Flickr.
This post is re-posted from the LSE Media Policy Project Blog.
As part of a series of interviews with LSE Faculty on themes related to the Truth, Trust and Technology Commission, Dr Omar Al-Ghazzi talks to LSE MSc student Ariel Riera on ‘echo chambers’ in the context of North Africa and the Middle East.
AR: The spread of misinformation through social media is a main focus of the Commission. Are there similar processes in the Middle East and in the North Africa region?
OA: Questions about trust, divisions within society, and authoritarian use of information or what could be called propaganda are very prevalent in the Middle East and North Africa. So in a way a lot of the issues at hand are not really new if we think about communication processes globally. Much of the attention that misinformation has been getting is in relation to Trump and Brexit. But Syria, for instance, is actually a very productive context to think through these questions, because with the uprising and the war, there was basically an information blackout where no independent journalist could go into the country. This created an environment where witnesses and citizen journalists and activists filled that gap. So it is now a cliché to say that the war in Syria is actually the most documented war. But all that information has not led to a narrative that people understand in relation to what’s happening. And that has to do with trust in digital media and the kind of narratives that the government disseminates. The echo chamber effect in the way people access information from online sources they agree with is also as prevalent in the Middle East as it is globally.
AR: And in these countries, who are the perpetrators of fake news and misinformation and what are the channels?
OA: It is a complicated question because if we talk about the war in Syria, the communication environment is much more complex than the binary division between fake and real. For instance, I am interested in the reporting on the ground in areas that are seeing or witnessing war and conflict. I will give you an example. Now in the suburbs of Damascus, where there is a battle between rebels and the government, there are several cases of children and teenagers doing the reporting. So how should this be picked up by news organisations, and what are the consequences? CNN recently called one of the teenagers based in Eastern Ghouta, Muhammed Najem, a ‘combat reporter’. What are the ethical considerations of that? Does that encourage that teenager to take, for instance, more risks to get that footage? How is what he produces objective if, first, he obviously has no journalism training as a very young person and, second, he is in a very violent context where his obvious interest lies in his own survival and in getting attention for his and his community’s suffering? He has a voice that he wants to be heard and which should be heard. But why is the expectation, if he is dubbed a ‘combat reporter’, that what he produces should be objective news reporting?
Beyond this example of the complex picture in war reporting, I think the Middle East region also teaches us that when there is a lack of trust in institutions of any country in the world, when there is division in society about a national sense of belonging, about what it means to be a patriot or a traitor, that would produce mistrust in the media. Basically, a fractured political environment engenders lack of trust in media, and engenders that debate around fake or real. So there is a layer beyond the fakeness and realness that’s really about social cohesion and political identity.
AR: Nationalist politicians all over the world have found in social media a way to bypass mainstream media and appeal directly to voters. What techniques do they use to do this?
OA: Perhaps in the Middle East you don’t find an example of a stream of consciousness relayed live on Twitter like the case is with President Trump, but, like elsewhere in the world, politicians are on Twitter and even foreign policy is often communicated there. Also, a lot of narratives that feed into conflicts, like the Arab-Israeli conflict, take shape on social media. So without looking at social media you certainly don’t get the full picture even of the geopolitics in the region. Without social media, one would not grasp how government positions get internalised by people and how people contribute – whether by feeding into government policies, or maybe resisting them as well.
AR: Based on your observations in North Africa and the Middle East, can mistrust or even distrust of mainstream media outlets be a healthy instinct? For example, if mainstream media is a place where only one voice is heard.
OA: Even though a lot of the media are politicised in the Arab world because they are government owned, people have access to media other than their own governments’ because of a common regional cultural affiliation, a shared language and the nature of the regional media environment. So actually people in the Arab world are sophisticated media users because they have access to a wide array of media outlets. Of course, there are outlets that are controlled by governments wherever one may be situated, and things vary between different countries, but audiences can access pan-Arab news media such as Al Jazeera, Al Arabiya and Al Mayadeen. They have access to a wide array of online news platforms as well as broadcast news. So you really have a lot of choices. If you are a very informed audience member you would watch one news outlet to know, let’s say, what the Iranian position on a certain event is, and then you watch a Saudi funded channel to see the Saudi position. But of course, most people don’t do that because they just access the media that offers the perspective they already agree with.
We have to remember that in the context of the Middle East there are a lot of different conflicts; there is war, which obviously heightens people’s emotions and allegiances, whatever their worldview is. So we are also talking about a context in which, because of what is happening on the ground, people feel strongly about their political positioning, which feeds into the echo chamber effect.
AR: You wrote that, at least linked to the Arab Spring, there was a ‘diversity of acts referred to as citizen journalism’. What differentiates these practices from the journalism within established media?
OA: Basically, in relation to the 2011 Arab uprisings, there were a lot of academic and journalistic approaches that talked about how these uprisings were Facebook or Twitter revolutions, or only theorising digital media practices through the lens of citizen journalism. But I argued that we cannot privilege one lens to look at what digital media does on the political level because a lot of people use digital media, from terrorist organisations to activists on the ground to government agents. So one cannot privilege a particular use of digital media and focus on that and make claims about digital media generally, when actually the picture is much more complicated and needs to be sorted out more.
Of course the proliferation of smartphones and social media offered ordinary people the opportunity to have their own output, to produce witness videos or write opinions. It is a very different media ecology because of that. However, we cannot take for granted how social media is used by different actors. In social science we have to think about issues of class, literacy, the urban rural divide, the political system, the media system. And, within that complexity, locate particular practices of social media rather than make blanket statements about social media doing something to politics generally and universally.
Dr Omar Al-Ghazzi is Assistant Professor in the Department of Media and Communications at LSE. He completed his PhD at the Annenberg School for Communication, the University of Pennsylvania, and holds MAs in Communication from the University of Pennsylvania and American University and a BA in Communication Arts from the Lebanese American University.
This post brings us some recent developments on freedom of expression from Dr David Goldberg, Senior Visiting Fellow, Institute of Computer and Communications Law in the Centre for Commercial Law Studies, Queen Mary, University of London, and member of the Information Law and Policy Centre’s Advisory Board.
Dr Goldberg has recently co-organised a symposium at the Southwestern Law School, Los Angeles, on “Fake News and Weaponized Defamation”. The event took place on the 26th January 2018. Further information on the event can be found at: https://www.swlaw.edu/curriculum/honors-programs/law-review-journals/journal-international-media-entertainment-law/global. Photos from the event are available at https://flic.kr/s/aHsmfxk8dL.
Dr Goldberg delivered a presentation at the event calling for enhancing media literacy, and cautioning against over-relying on the law to deal with the so-called phenomenon of fake news. Dr Goldberg’s presentation will be available in a forthcoming publication.
In addition, Dr Goldberg has recently published a chapter entitled ‘Dronalism, Newsgathering Protection and Day-to-day Norms’ in Responsible Drone Journalism (2018) edited by Astrid Gynnild and Turo Uskali. The book is available at https://www.crcpress.com/Responsible-Drone-Journalism/Gynnild-Uskali/p/book/9781138059351.
Lastly, following up on the ‘Freedom of Information at 250’ event held at the Free Word Centre in December 2016 with the support of the Information Law and Policy Centre at the Institute of Advanced Legal Studies, and the Embassies of Sweden and Finland, the publication Press Freedom 250 Years: Freedom of the Press and Public Access to Official Documents in Sweden and Finland – A Living Heritage from 1766 is now available in English. The publication of this translation has been in large part due to the efforts of Dr David Goldberg, Mark Weiler and Staffan Dalhoff. The book was launched on 2nd December 2016 at the Swedish Parliament, and the free PDF is available at http://www.riksdagen.se/globalassets/15.-bestall-och-ladda-ned/andra-sprak/tf-250-ar-eng-2018.pdf.
To order the book for libraries, contact:
Riksdag Printing Office, SE 100 12 Stockholm
ILPC Annual Conference and Annual Lecture 2017
Children and Digital Rights: Regulating Freedoms and Safeguards
The Internet provides children with more freedom to communicate, learn, create, share, and engage with society than ever before. Research by Ofcom in 2016 found that 72 percent of young teenagers in the UK have social media accounts. Twenty percent of the same group have made their own digital music and 30 percent have used the Internet for civic engagement by signing online petitions or by sharing and talking about the news.
Interacting within this connected digital world, however, also presents a number of challenges to ensuring the adequate protection of a child’s rights to privacy, freedom of expression, and safety, both online and offline. These risks range from children being unable to identify advertisements on search engines to being subjects of bullying or grooming or other types of abuse in online chat groups.
Children may also be targeted via social media platforms with methods (such as fake online identities or manipulated photos and images) specially designed to harm them or exploit their particular vulnerabilities and naivety.
These issues were the focus of the 2017 Annual Conference of the Information Law and Policy Centre (ILPC) based at the Institute of Advanced Legal Studies, University of London. The ILPC produces, promotes, and facilitates research about the law and policy of information and data, and the ways in which law both restricts and enables the sharing and dissemination of different types of information.
The ILPC’s Annual Conference was one of a series of events celebrating the 70th Anniversary of the founding of the Institute of Advanced Legal Studies. Other events included the ILPC’s Being Human Festival expert and interdisciplinary panel discussion on ‘Co-existing with HAL 9000: Being Human in a World with Artificial Intelligence’.
At the 2017 ILPC Annual Conference, leading policymakers, practitioners, regulators, key representatives from industry and civil society, and academic experts examined and debated the opportunities and challenges posed by current and future legal frameworks and the policies being used and developed to safeguard these freedoms and rights.
These leading stakeholders included Rachel Bishop, Deputy Director of Internet Policy at the Department for Digital, Culture, Media and Sport (DCMS); Lisa Atkinson, Head of Policy at the Information Commissioner’s Office (ICO); Anna Morgan, Deputy Data Protection Commissioner of Ireland; Graham Smith, Internet law expert at Bird & Bird LLP; Renate Samson, former CEO of privacy advocacy organisation Big Brother Watch; and Simon Milner, Facebook’s Policy Director for the UK, Africa, and the Middle East.
The legal frameworks under scrutiny included the UN Convention on the Rights of the Child, the related provisions of the UK Digital Charter, and the UK Data Protection Bill, which will implement the major reforms of the much anticipated EU General Data Protection Regulation (2016/679) (GDPR), applicable from 25 May 2018. Key concerns expressed by delegates included the effectiveness in practice of, and the lack of evidence-based policy behind, the controversial age of consent for children’s use of online information services provided for under the GDPR.
Further questions were raised about the impact in practice on children’s privacy, freedom of expression, and civil liberties of the new transparency and accountability principles and mechanisms that industry and governments must implement when their data processing involves the online marketing to, or monitoring of, children.
Given the importance and pertinence of these challenging and cutting-edge policy issues, the Centre is delighted that several papers presented, discussed, and debated at the conference’s plenary sessions and keynote panels, by regulators and academic experts from institutions within the UK, the EU, and beyond, feature in a special issue of the leading peer-reviewed legal journal Communications Law, published by Bloomsbury.
This special issue also includes the Centre’s 2017 Annual Lecture delivered by one of the country’s leading children’s online rights campaigners, Baroness Beeban Kidron OBE, also a member of the House of Lords and film-maker, on ‘Are Children more than Clickbait in the 21st Century?’
For IALS podcasts of the 2017 ILPC Annual Lecture delivered by Baroness Kidron and presentations from the Annual Conference’s Keynote Panel, please see the IALS website at: http://ials.sas.ac.uk/digital/videos.
Nora Ni Loideain
Director and Lecturer in Law,
Information Law and Policy Centre,
IALS, University of London.
5th Winchester Conference on Trust, Risk, Information and the Law Wednesday 25 April 2018, Holiday Inn, Winchester, UK
Theme: Public Law, Politics and the Constitution: A new battleground between the Law and Technology?
Keynote speakers will be Michael Barton, Chief Constable of Durham Constabulary, who has spoken recently about the need to reclaim ‘sovereignty’ over the Internet, and Jamie Bartlett, Director of the Centre for the Analysis of Social Media for Demos in conjunction with the University of Sussex, and author of several books including ‘Radicals’ and ‘The Dark Net’. Breakout sessions will explore fake news, the use of algorithms in the public sector, infringements over the Internet and other issues. The conference will include the launch of the University of Winchester’s new Centre for Parliament and Public Law, with a presentation highlighting the ongoing work of the Department for Digital, Culture, Media & Sport in the area of Data Ethics & Innovation.
For the full conference programme, please visit https://www.winchester.ac.uk/news-and-events/events/event-items/the-5th-winchester-conference-on-trust-risk-information-and-the-law-trilcon18.php
The decision to set up a new National Security Communications Unit to counter the growth of “fake news” is not the first time the UK government has devoted resources to exploit the defensive and offensive capabilities of information. A similar thing was tried in the Cold War era, with mixed results.
The planned unit has emerged as part of a wider review of defence capabilities. It will reportedly be dedicated to “combating disinformation by state actors and others” and was agreed at a meeting of the National Security Council (NSC).
As a spokesperson for UK prime minister Theresa May told journalists:
We are living in an era of fake news and competing narratives. The government will respond with more and better use of national security communications to tackle these interconnected, complex challenges.
Parliament’s Digital, Culture, Media and Sport Committee is currently investigating the use of fake news – the spreading of stories of “uncertain provenance or accuracy” – through social media and other channels. The investigation is taking place amid claims that Russia used hundreds of fake accounts to tweet about Brexit. The head of the army, General Sir Nick Carter, recently told the think-tank RUSI that Britain should be prepared to fight an increasingly assertive Russia.
Details of the new anti-fake news unit are vague, but it may mark a return to Britain’s Cold War past and the work of the Foreign Office’s Information Research Department (IRD), which was set up in 1948 to counter Soviet propaganda. The unit was the brainchild of Christopher Mayhew, Labour MP and under-secretary in the Foreign Office, and grew to become one of the largest Foreign Office departments before its disbandment in 1977 – a story revealed in The Guardian in January 1978 by its investigative reporter David Leigh.
This secretive government body worked with politicians, journalists and foreign governments to counter Soviet lies, through unattributable “grey” propaganda and confidential briefings on “Communist themes”. IRD eventually expanded from this narrow anti-Soviet remit to protect British interests where they were likely “to be the object of hostile threats”.
By 1949, IRD had a staff of just 52, all based in central London. By 1965 it employed 390 staff, including 48 overseas, with a budget of over £1m, mostly paid from the “secret vote” used to fund the UK intelligence community. IRD also worked alongside the Secret Intelligence Service (SIS or MI6) and the BBC’s World Service.
Playing hardball with soft power
Examples of IRD’s early work include reports on Soviet gulags and the promotion of anti-communist literature. George Orwell’s work was actively promoted by the unit. Shortly before his death in 1950, Orwell even gave it a list of left-wing writers and journalists “who should not be trusted” to spread IRD’s message. During that decade, the department even moved into British domestic politics by setting up a “home desk” to counter communism in industry.
IRD also played an important role in undermining Indonesia’s President Sukarno in the 1960s, as well as supporting western NGOs – especially the Thomson and Ford Foundations. In 1996, former IRD official Norman Reddaway provided more information on IRD’s “long-term” campaigns (contained in private papers). These included “English by TV” broadcast to the Gulf, Sudan, Ethiopia and China, with other IRD-backed BBC initiatives – “Follow Me” and “Follow Me to Science” – which had an estimated audience of 100m in China.
IRD was even involved in supporting Britain’s entry to the European Economic Community, promoting the UK’s interests in Europe and backing politicians on both sides. It would shape the debate by writing a letter or article a day in the quality press. The department was also involved in more controversial campaigns: spreading anti-IRA propaganda during The Troubles in Northern Ireland, supporting Britain’s control of Gibraltar and countering the “Black Power” movement in the Caribbean.
Going too far
IRD’s activities were steadily getting out of hand, yet an internal 1971 review found the department was still needed, given “the primary threat to British and Western interests worldwide remains that from Soviet Communism” and the “violent revolutionaries of the ‘New Left’”. IRD was a “flexible auxiliary, specialising in influencing opinion”, yet its days were numbered. By 1972 the organisation had just over 100 staff and faced significant budget cuts, despite attempts at reform.
IRD was eventually killed off thanks to opposition from Foreign Office mandarins and the then Labour foreign secretary, David Owen – though that may not be the end of the story. Officials soon set up the Overseas Information Department – likely a play on IRD’s name – tasked with making “attributable and non-attributable” written guidance for journalists and politicians, though its overall role is unclear. Information work was also carried out by “alongsiders” such as the former IRD official Brian Crozier.
The history of IRD’s work is important to future debates on government strategy in countering “fake news”. The unit’s effectiveness is certainly open to debate. In many cases, IRD’s work reinforced the anti-Soviet views of some, while doing little, if anything, to influence general opinion.
In 1976, one Foreign Office official even admitted that IRD’s work could do “more harm than good to institutionalise our opposition” and was “very expensive in manpower and is practically impossible to evaluate in cost effectiveness” – a point worth considering today.
IRD’s rapid expansion from anti-communist unit to protecting Britain’s interests across the globe also shows that it’s hard to manage information campaigns. What may start out as a unit to counter “fake news” could easily spiral out of control, especially given the rapidly expanding online battlefield.
Government penny pinching on defence – a key issue in current debates – could also fail to match the resources at the disposal of the Russian state. In short, the lessons of IRD show that information work is not a quick fix. The British government could learn a lot by visiting the past.
The Information Law and Policy Centre held its third annual conference on 17th November 2017. The workshop’s theme was: ‘Children and Digital Rights: Regulating Freedoms and Safeguards’.
The workshop brought together regulators, practitioners, civil society, and leading academic experts who addressed and examined the key legal frameworks and policies being used and developed to safeguard children’s digital freedoms and rights. These legislative and policy regimes include the UN Convention on the Rights of the Child, and the related provisions (such as consent, transparency, and profiling) under the UK Digital Charter, and the Data Protection Bill which will implement the EU General Data Protection Regulation.
The following resources are available online:
This event will focus on the implications posed by the increasingly significant role of artificial intelligence (AI) in society and the possible ways in which humans will co-exist with AI in future, particularly the impact that this interaction will have on our liberty, privacy, and agency. Will the benefits of AI only be achieved at the expense of these human rights and values? Do current laws, ethics, or technologies offer any guidance with respect to how we should navigate this future society?
Organisation: Institute of Advanced Legal Studies
Event date: Monday, 20 November 2017 – 5:30pm
05 Feb 2018, 17:30 to 05 Feb 2018, 19:30
Institute of Advanced Legal Studies
Institute of Advanced Legal Studies, 17 Russell Square, London WC1B 5DR
Speaker: Damian Clifford, KU Leuven Centre for IT and IP Law
Panel Discussant: Dr Edina Harbinja, Senior Lecturer in Law, University of Hertfordshire
Chair: Dr Nora Ni Loideain, Director and Lecturer in Law, Information Law and Policy Centre, Institute of Advanced Legal Studies
Emotions play a key role in decision making. Technological advancements are now rendering emotions detectable in real-time. Building on the granular insights provided by big data, such technological developments allow commercial entities to move beyond the targeting of behaviour in advertisements to the personalisation of services, interfaces and other consumer-facing interactions, based on personal preferences, biases and emotion insights gleaned from the tracking of online activity, profiling, and the emergence of ‘empathic media’.
Although emotion measurement is far from a new phenomenon, technological developments are increasing the capacity to monetise emotions. Techniques ranging from the analysis of, inter alia, facial expressions and voice/sound patterns to text and data mining and the use of smart devices to detect emotions are becoming mainstream.
Although many applications of such technologies appear morally above reproach in terms of their goals (e.g. healthcare or road safety), as opposed to the risks associated with their implementation, deployment and potential effects, their use for advertising and marketing purposes raises clear concerns for the rationality-based paradigm inherent in citizen-consumer protections, and thus for the autonomous decision-making capacity of individuals.
In this ILPC seminar, Visiting Scholar Damian Clifford will examine the emergence of such technologies in an online context vis-à-vis their use for commercial advertising and marketing purposes (construed broadly) and the challenges they present for EU data protection and consumer protection law. The analysis will rely on a descriptive and evaluative analysis of the relevant frameworks and aims to provide normative insights into the potential legal challenges presented by emotion commercialisation online.
Discussant: Dr Edina Harbinja is a Senior Lecturer in Law at the University of Hertfordshire. Her principal areas of research and teaching relate to the legal issues surrounding the Internet and emerging technologies. In her research, Edina explores the application of property, contract law, intellectual property and privacy online. Edina is a pioneer and a recognised expert in post-mortem privacy, i.e. the privacy of deceased individuals. Her research has a policy and multidisciplinary focus and aims to explore different options for the regulation of online behaviours and phenomena. She has been a visiting scholar and invited speaker at universities and conferences in the USA, Latin America and Europe, and has undertaken consultancy for the Fundamental Rights Agency. Her research has also been cited by legislators, courts and policymakers in the US and Europe. Find her on Twitter at @EdinaRl.
A wine reception will follow this seminar.
This event is FREE but advance booking is required.
19 Feb 2018, 17:30 to 19 Feb 2018, 19:30
Institute of Advanced Legal Studies, 17 Russell Square, London WC1B 5DR
Personal Data as an Asset: Design and Incentive Alignments in a Personal Data Economy
Description of Presentation: Despite the World Economic Forum’s 2011 report on personal data becoming an asset class, the cost of transacting on personal data is becoming increasingly high, with regulatory risks, societal disapproval, legal complexity and privacy concerns. Professor Irene Ng contends that this is because personal data as an asset is currently controlled by organisations. As a co-produced asset, the person has not had the technological capability to control and process his or her own data or, indeed, data in general. Hence, legal and economic structures have been created only around organisation-controlled personal data (OPD). This presentation will argue that person-controlled personal data (PPD), technologically, legally and economically architected such that the individual owns a personal micro-server and therefore has full rights to the data within, much like owning a PC or a smartphone, is potentially a route to reducing transaction costs and innovating in the personal data economy. I will present the design and incentive alignments of stakeholders on the HAT hub-of-all-things platform (https://hubofallthings.com).
Professor Irene Ng, University of Warwick
Professor Irene Ng is the Director of the International Institute for Product and Service Innovation and the Professor of Marketing and Service Systems at WMG, University of Warwick. She is also the Chairman of the Hub-of-all-Things (HAT) Foundation Group (http://hubofallthings.com). A market design economist, Professor Ng is an advisor to large organisations, startups and governments on design of markets, economic and business models in the digital economy. Personal website http://ireneng.com
Dr Nora Ni Loideain, Director and Lecturer in Law, Information Law & Policy Centre, IALS
Wine reception to follow.
Artificial intelligence can already predict the future. Police forces are using it to map when and where crime is likely to occur. Doctors can use it to predict when a patient is most likely to have a heart attack or stroke. Researchers are even trying to give AI imagination so it can plan for unexpected consequences.
Many decisions in our lives require a good forecast, and AI agents are almost always better at forecasting than their human counterparts. Yet for all these technological advances, we still seem to deeply lack confidence in AI predictions. Recent cases show that people don’t like relying on AI and prefer to trust human experts, even if these experts are wrong.
If we want AI to really benefit people, we need to find a way to get people to trust it. To do that, we need to understand why people are so reluctant to trust AI in the first place.
Should you trust Dr. Robot?
IBM’s attempt to promote its Watson for Oncology supercomputer programme to cancer doctors was a PR disaster. The AI promised to deliver top-quality recommendations on the treatment of 12 cancers that accounted for 80% of the world’s cases. As of today, over 14,000 patients worldwide have received advice based on its calculations.
But when doctors first interacted with Watson they found themselves in a rather difficult situation. On the one hand, if Watson provided guidance about a treatment that coincided with their own opinions, physicians did not see much value in Watson’s recommendations. The supercomputer was simply telling them what they already knew, and these recommendations did not change the actual treatment. This may have given doctors some peace of mind, providing them with more confidence in their own decisions. But IBM has yet to provide evidence that Watson actually improves cancer survival rates.
On the other hand, if Watson generated a recommendation that contradicted the experts’ opinion, doctors would typically conclude that Watson wasn’t competent. And the machine wouldn’t be able to explain why its treatment was plausible because its machine learning algorithms were simply too complex to be fully understood by humans. Consequently, this has caused even more mistrust and disbelief, leading many doctors to ignore the seemingly outlandish AI recommendations and stick to their own expertise.
As a result, IBM Watson’s premier medical partner, the MD Anderson Cancer Center, recently announced it was dropping the programme. Similarly, a Danish hospital reportedly abandoned the AI programme after discovering that its cancer doctors disagreed with Watson in over two thirds of cases.
The problem with Watson for Oncology was that doctors simply didn’t trust it. Human trust is often based on our understanding of how other people think and having experience of their reliability. This helps create a psychological feeling of safety. AI, on the other hand, is still fairly new and unfamiliar to most people. It makes decisions using a complex system of analysis to identify potentially hidden patterns and weak signals from large amounts of data.
Even if it can be technically explained (and that’s not always the case), AI’s decision-making process is usually too difficult for most people to understand. And interacting with something we don’t understand can cause anxiety and make us feel like we’re losing control. Many people are also simply not familiar with many instances of AI actually working, because it often happens in the background.
Instead, they are acutely aware of instances where AI goes wrong: a Google algorithm that classifies people of colour as gorillas; a Microsoft chatbot that decides to become a white supremacist in less than a day; a Tesla car operating in autopilot mode that resulted in a fatal accident. These unfortunate examples have received a disproportionate amount of media attention, emphasising the message that we cannot rely on technology. Machine learning is not foolproof, in part because the humans who design it aren’t.
A new AI divide in society?
Feelings about AI also run deep. My colleagues and I recently ran an experiment where we asked people from a range of backgrounds to watch various sci-fi films about AI and then asked them questions about automation in everyday life. We found that, regardless of whether the film they watched depicted AI in a positive or negative light, simply watching a cinematic vision of our technological future polarised the participants’ attitudes. Optimists became more extreme in their enthusiasm for AI and sceptics became even more guarded.
This suggests people use relevant evidence about AI in a biased manner to support their existing attitudes, a deep-rooted human tendency known as confirmation bias. As AI is reported and represented more and more in the media, it could contribute to a deeply divided society, split between those who benefit from AI and those who reject it. More pertinently, refusing to accept the advantages offered by AI could place a large group of people at a serious disadvantage.
Three ways out of the AI trust crisis
Fortunately we already have some ideas about how to improve trust in AI. Simply having previous experience with AI can significantly improve people’s attitudes towards the technology, as we found in our study. Similar evidence also suggests the more you use other technologies such as the internet, the more you trust them.
Another solution may be to open the “black box” of machine learning algorithms and be more transparent about how they work. Companies such as Google, Airbnb and Twitter already release transparency reports about government requests and surveillance disclosures. A similar practice for AI systems could help people better understand how algorithmic decisions are made.
Research suggests involving people more in the AI decision-making process could also improve trust and allow the AI to learn from human experience. For example, one study showed that people who were given the freedom to slightly modify an algorithm felt more satisfied with its decisions, were more likely to believe it was superior and were more likely to use it in the future.
We don’t need to understand the intricate inner workings of AI systems, but if people are given at least a bit of information about and control over how they are implemented, they will be more open to accepting AI into their lives.
In this guest post, Marion Oswald offers her homage to Yes Minister and, in that tradition, smuggles in some pertinent observations on AI fears. This post first appeared on the SCL website’s Blog as part of Laurence Eastham’s Predictions 2018 series. It is also appearing in Computers & Law, December/January issue.
Humphrey, I want to do something about predictions.
Yes Humphrey, the machines are taking over.
Are they Minister?
Yes Humphrey, my advisers tell me I should be up in arms. Machines – ‘AI’ they call it – predicting what I’m going to buy, when I’m going to die, even if I’ll commit a crime.
Surely not, Minister.
Not me personally, of course, Humphrey – other people. And then there’s this scandal over Cambridge Analytica and voter profiling. Has no-one heard of the secret ballot?
Everyone knows which way you would vote, Minister.
Yes, yes, not me personally, of course, Humphrey – other people. Anyway, I want to do something about it.
Of course, Minister. Let me see – you want to ban voter and customer profiling, crime risk assessment and predictions of one’s demise, so that would mean no more targeted advertising, political campaigning, predictive policing, early parole releases, life insurance policies…
Well, let’s not be too hasty Humphrey. I didn’t say anything about banning things.
My sincere apologies Minister, I had understood you wanted to do something.
Yes, Humphrey, about the machines, the AI. People don’t like the idea of some faceless computer snooping into their lives and making predictions about them.
But it’s alright if a human does it.
Yes…well no…I don’t know. What do you suggest Humphrey?
As I see it Minister, you have two problems.
The people are the ones with the votes, the AI developers are the ones with the money and the important clients – insurance companies, social media giants, dare I say it, even political parties..
Yes, yes, I see. I mustn’t alienate the money. But I must be seen to be doing something Humphrey.
I have two suggestions Minister. First, everything must be ‘transparent’. Organisations using AI must say how their technology works and what data it uses. Information, information everywhere…
I like it Humphrey. Power to the people and all that. And if they’ve had the information, they can’t complain, eh. And the second thing?
A Commission, Minister, or a Committee, with eminent members, debating, assessing, scrutinising, evaluating, appraising…
And what is this Commission to do?
What will the Commission do about predictions and AI?
It will scrutinise, Minister, it will evaluate, appraise and assess, and then, in two or three years, it will report.
But what will it say Humphrey?
I cannot possibly predict what the Commission on Predictions would say, being a mere humble servant of the Crown.
But if I had to guess, I think it highly likely that it will say that context reigns supreme – there are good predictions and there are bad predictions, and there is good AI and there is bad AI.
So after three years of talking, all it will say is that ‘it depends’.
In homage to ‘Yes Minister’ by Antony Jay and Jonathan Lynn
Marion Oswald, Senior Fellow in Law, Head of the Centre for Information Rights, University of Winchester
The Fifth Interdisciplinary Winchester Conference on Trust, Risk, Information and the Law will be held on Wednesday 25 April 2018 at the Holiday Inn, Winchester UK. Our overall theme for this conference will be: Public Law, Politics and the Constitution: A new battleground between the Law and Technology? The call for papers and booking information can be found at https://journals.winchesteruniversitypress.org/index.php/jirpp/pages/view/TRIL
In this guest post, Yijun Yu, Senior Lecturer, Department of Computing and Communications, The Open University examines the world’s top websites and their routine tracking of a user’s every keystroke, mouse movement and input into a web form – even if it’s later deleted.
Hundreds of the world’s top websites routinely track a user’s every keystroke, mouse movement and input into a web form – even before it’s submitted or later abandoned, according to the results of a study from researchers at Princeton University.
And there’s a nasty side-effect: personal identifiable data, such as medical information, passwords and credit card details, could be revealed when users surf the web – without them knowing that companies are monitoring their browsing behaviour. It’s a situation that should alarm anyone who cares about their privacy.
The Princeton researchers found it was difficult to redact personally identifiable information from browsing behaviour records – even, in some instances, when users have switched on privacy settings such as Do Not Track.
The research found that third-party tracking services are used by hundreds of businesses to monitor how users navigate their websites. Such monitoring is proving increasingly challenging as more and more companies beef up security and shift their sites over to encrypted HTTPS pages.
To work around this, session-replay scripts are deployed to monitor user interface behaviour on websites as a sequence of time-stamped events, such as keyboard and mouse movements. Each of these events records additional parameters – indicating the keystrokes (for keyboard events) and screen coordinates (for mouse movement events) – at the time of interaction. When associated with the content of a website and web address, this recorded sequence of events can be exactly replayed by another browser that triggers the functions defined by the website.
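As an illustration of this model – a minimal Python sketch, not the actual scripts the researchers studied – a session recording can be held as an ordered list of time-stamped events; replaying just the keyboard events is enough for a third party to reconstruct everything typed into a page, whether or not the form was ever submitted:

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class UIEvent:
    """One time-stamped user-interface event, as a session-replay script records it."""
    timestamp: float          # seconds since the session started
    kind: str                 # e.g. "keydown", "mousemove"
    params: Dict[str, object] # keystroke for keyboard events, coordinates for mouse events


@dataclass
class SessionRecording:
    url: str
    events: List[UIEvent] = field(default_factory=list)

    def record(self, timestamp: float, kind: str, **params) -> None:
        self.events.append(UIEvent(timestamp, kind, dict(params)))


def replay_typed_text(recording: SessionRecording) -> str:
    # Replaying only the keyboard events, in timestamp order, reconstructs
    # everything the user typed on the page.
    return "".join(
        str(e.params["key"])
        for e in sorted(recording.events, key=lambda e: e.timestamp)
        if e.kind == "keydown" and len(str(e.params.get("key", ""))) == 1
    )


rec = SessionRecording(url="https://example.com/login")
for i, ch in enumerate("hunter2"):          # user types a password, never submits
    rec.record(timestamp=0.1 * i, kind="keydown", key=ch)
rec.record(timestamp=0.9, kind="mousemove", x=120, y=340)

print(replay_typed_text(rec))  # hunter2
```

The point of the sketch is that the recording is just data: anyone who receives the event stream can recover the password without ever touching the original page.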
What this means is that a third party is able to see, for example, a user entering a password into an online form – a clear privacy breach. Websites employ third-party analytics firms to record and replay such behaviour, they argue, in the name of “enhancing user experience”: the more they know about what their users are after, the easier it is to provide them with targeted information.
While it’s not news that companies are monitoring our behaviour as we surf the web, the fact that scripts are quietly being deployed to record individual browser sessions in this way has concerned the study’s co-author, Steven Englehardt, who is a PhD candidate at Princeton.
A website user replay demo in action.
“Collection of page content by third-party replay scripts may cause sensitive information, such as medical conditions, credit card details, and other personal information displayed on a page, to leak to the third-party as part of the recording,” he wrote. “This may expose users to identity theft, online scams and other unwanted behaviour. The same is true for the collection of user inputs during checkout and registration processes.”
Keystroke logging by websites has been a known issue among cybersecurity experts for a while. And Princeton’s empirical study raises valid concerns about users having little or no control over their surfing behaviour being recorded in this way.
So it’s important to help users control how their information is shared online. But there are increasing signs of usability trumping security measures that are designed to keep our data safe online.
Usability vs security
Password managers are used by millions of people to help them easily keep a record of different passwords for different sites. The user of such a service only needs to memorise one key password.
Recently, a group of researchers at the University of Derby and the Open University discovered that the offline clients of password manager services risked exposing this key password: it was stored as plain text in memory, where it could be sniffed or dumped by whole-system attacks.
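A common mitigation – illustrated here with a minimal, hypothetical Python sketch rather than the clients that were studied – is to hold the key password in a mutable buffer and overwrite it as soon as it has been used, shrinking the window during which the plain text sits in memory. In a garbage-collected runtime this reduces, but cannot eliminate, the exposure, since copies may still be made elsewhere:

```python
from typing import Callable, TypeVar

T = TypeVar("T")


def with_secret(secret: bytearray, use: Callable[[bytearray], T]) -> T:
    """Pass the secret to `use`, then overwrite the buffer so the plain-text
    key password does not linger in memory afterwards."""
    try:
        return use(secret)
    finally:
        for i in range(len(secret)):
            secret[i] = 0  # zero the buffer in place


master = bytearray(b"one key password")
secret_len = with_secret(master, len)

print(secret_len)               # 16
print(master == bytearray(16))  # True: the buffer now holds only zero bytes
```

The design choice of a `bytearray` over a `str` matters: Python strings are immutable, so a plain-text string password cannot be scrubbed at all and survives until the garbage collector reclaims it.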
User experience is not an excuse for tolerating security flaws.
The Information Law and Policy Centre’s Annual Conference 2017 – Children and Digital Rights: Regulating Freedoms and Safeguards
In this guest post Lorna Woods, Professor of Internet Law at the University of Essex, provides an analysis on the new ECJ opinion . This post first appeared on the blog of Steve Peers, Professor of EU, Human Rights and World Trade Law at the University of Essex.
Who is responsible for data protection law compliance on Facebook fan sites? That issue is analysed in a recent opinion of an ECJ Advocate-General, in the case of Wirtschaftsakademie (full title: Unabhängiges Landeszentrum für Datenschutz Schleswig-Holstein v Wirtschaftsakademie Schleswig-Holstein GmbH, in the presence of Facebook Ireland Ltd, Vertreter des Bundesinteresses beim Bundesverwaltungsgericht).
This case is one more in a line of cases dealing specifically with the jurisdiction of national data protection supervisory authorities, a line of reasoning which seems to operate separately from the Brussels I Recast Regulation, which concerns the jurisdiction of courts over civil and commercial disputes. While this is an Advocate-General’s opinion, and therefore not binding on the Court, if followed by the Court it would consolidate the Court’s prior broad interpretation of the Data Protection Directive. While this might be the headline, it is worth considering a perhaps overlooked element of the data economy: the role of the content provider in supplying the individuals whose data is harvested.
Wirtschaftsakademie set up a ‘fan page’ on Facebook. The data protection authority in Schleswig-Holstein sought the deactivation of the fan page on the basis that visitors to the fan page were not warned that their personal data would be collected by means of cookies placed on the visitor’s hard disk. The purpose of that data collection was twofold: to compile viewing statistics for the administrator of the fan page; and to enable Facebook to target advertisements at each visitor by tracking the visitors’ web browsing habits, otherwise known as behavioural advertising. Such activity must comply with the Data Protection Directive (DPD) (as implemented in the various Member States). While the content attracting visitors was that of Wirtschaftsakademie, it relied on Facebook for data collection and analysis. It is here that a number of preliminary questions arise:
- Who is the controller for the purposes of the data protection regime;
- Which is the applicable national law; and
- What is the scope of the national supervisory authority’s regulatory competence?
The referring court had assumed that Wirtschaftsakademie was not a controller as it had no influence, in law or in fact, over the manner in which the personal data was processed by Facebook, and the fact that Wirtschaftsakademie had recourse to analytical tools for its own purposes did not change this [para 28]. Advocate General Bot, however, disagreed with this assessment, arguing that Wirtschaftsakademie was a joint controller for the purposes of the DPD – a possibility for which Article 2(d) DPD makes explicit provision [paras 42, 51, 52]. The Advocate General accepted that the system was designed by Facebook so as to facilitate a data-driven business model and that Wirtschaftsakademie was principally a user of the social network [para 53]. He highlighted, however, that without the participation of Wirtschaftsakademie the data processing in respect of visitors to its fan page could not occur, and that the administrator could end that processing by closing the fan page down. In sum:
Inasmuch as he agrees to the means and purposes of the processing of personal data, as predefined by Facebook, a fan page administrator must be regarded as having participated in the determination of those means and purposes. [para 56]
Advocate General Bot further suggested that the use of the various filters included in the analytical tools provided meant that the user had a direct impact on how data was processed by Facebook. To similar effect, a user can also seek to reach specific audiences, as defined by the user. As a result, the user has a controlling role in the acquisition phase of data processing by Facebook. The Advocate General rejected a formal analysis based on the terms of the contract concluded by the user and Facebook [para 60]; the fact that the user may be presented with ‘take it or leave it’ terms does not affect the conclusion that the user may be a controller.
As a final point, the Advocate General referred to the risk of data protection rules being circumvented, arguing that:
had the Wirtschaftsakademie created a website elsewhere than on Facebook and implemented a tool similar to ‘Facebook Insights’ in order to compile viewing statistics, it would be regarded as the controller of the processing needed to compile those statistics [para 65].
A similar approach should be taken in relation to social media plug ins (such as Facebook’s like button), which allow Facebook to gather data on third party websites without the end-user’s consent (see Case C-40/17 Fashion ID, pending).
Having recognised that joint responsibility was an important factor in ensuring the protection of rights, the Advocate General – referring to the approach of the Article 29 Working Party on data protection – clarified that this did not mean that both parties would have equal responsibility, but rather their respective responsibility would vary depending on their involvement at the various stages of processing activities.
Facebook is established outside the EU, but it has a number of EU established subsidiaries: the subsidiary which has responsibility for data protection is established in Ireland, while the other subsidiaries have responsibility for the sale of advertising. This raises a number of questions: can the German supervisory authority exercise its powers and if so, against which subsidiary?
Applicable law is dealt with in Article 4 DPD, which refers to the competence of the Member State where the controller is established but which also envisages the possibility, in the case of a non-EU parent company, of multiple establishments. The issue comes down to the interpretation of the phrase from Art. 4(1)(a), ‘in the context of the activities of an establishment’, which according to Weltimmo cannot be interpreted restrictively [para 87]. The Advocate General determined that there were two criteria [para 88]:
- An establishment within the relevant Member State; and
- Processing in connection with that establishment.
Relying on Weltimmo and Verein für Konsumenteninformation, the Advocate General identified the relevant factors, which follow the general freedom-of-establishment approach of looking for real activity through stable arrangements; the approach is not formalistic. Facebook Germany clearly satisfies these tests.
Referring to Article 29 Working Party Opinion 8/2010, the Advocate General re-iterated that in relation to the second criterion, it is context not location that is important. In Google Spain, the Court of Justice linked the selling of advertising (in Spain) to the processing of data (in the US) to hold that the processing was carried out in the context of the Spanish subsidiary given the economic nexus between the processing and the advertising revenue. The business set up for Facebook here is the same, and the fact that there is an Irish office does not change the fact that the data processing takes place in the context of the German subsidiary. The DPD does not introduce a one-stop shop; to the contrary, a deliberate choice was made to allow the application of multiple national legal systems (see Rec 19 DPD), and this approach is supported by the judgment in Verein für Konsumenteninformation in relation to Amazon. The system will change with the entry into force of the General Data Protection Regulation (GDPR), but the Advocate General proposed that the Court should not pre-empt the entry into force of that legislation (due May 2018) in its interpretation, as the cooperation mechanism on which it depends is not yet in place [para 103].
By contrast to Weltimmo, where the supervisory authority was seeking to impose a fine on a company established in another Member State, here the supervisory authority would be imposing German law on a German company. There is a question, however, as to the addressee of any enforcement measure. On one interpretation, the German regulator should have the power only to direct compliance on the company established on its territory, even though that might not be effective. Alternatively, the DPD could be interpreted so as to allow the German regulator to direct compliance from Facebook Ireland. Looking at the fundamental role of controllers, Advocate General Bot suggested that this was the preferred solution. Article 28(1), (3) and (6) DPD entitle the supervisory authority of the Member State in which the establishment of the controller is located, by contrast to the position in Weltimmo, to exercise its powers of intervention without being required first to call on the supervisory authority of the Member State in which the controller is located to exercise its powers.
The novelty in this Opinion relates to the first question, and it is significant because the business model espoused by social media companies depends on the participation of those providing content, who seem at the moment to take little responsibility for their actions. The price paid by third parties (in terms of data) is facilitated by these content providers, allowing them to avoid or minimise their business costs. Should there be consistent enforcement action against such users, this may gradually have an effect on the underlying platform’s business model. While it is harder to regulate mice than elephants, at least these mice appear to be clearly within the geographic jurisdiction of the German regulator – and will remain so even when the GDPR is in force.
The Advocate General went out of his way to explain that there was no difference between the situation in issue here and that in the other relevant pending case, Case C-40/17 Fashion ID. This case concerns the choice by a website provider to embed third party code allowing the collection of data in respect of visitors in the programming for the website for its own ends (increased visibility of and thus traffic to the website): the code in question is that underpinning the Facebook ‘like’ button, but would also presumably include similar codes from Twitter or Instagram.
If there was any doubt from cases – for example Weltimmo – about whether there is a one-stop shop (ie only one possible supervisory authority with jurisdiction across the EU) in the Data Protection Directive, the Advocate General expressly refutes this point. In this context, it seems that this case adds little new, rather elaborating points of detail based on the precise factual set-up of Facebook operations in the EU. It seems well-established now that – at least under the DPD – clever multinational corporate structures cannot funnel data protection compliance through a chosen national regime.
It may be worth noting also the broad approach of the Advocate General to Google Spain when determining whether processing is in the context of activities. There the Court observed that:
‘in such circumstances, the activities of the operator of the search engine and those of its establishment situated in the Member State concerned are inextricably linked since the activities relating to the advertising space constitute the means of rendering the search engine at issue economically profitable and that engine is, at the same time, the means enabling those activities to be performed’ [Google Spain, para 56]
Here, the Advocate General focussed on the fact that social networks such as Facebook generate much of their revenue from advertisements posted on the web pages set up and accessed by users and that there is therefore an indissoluble link between the two activities. Thus it seems that the Google Spain reasoning applies broadly to many free services paid for by user data, even if third parties – for example those providing the content on the page visited – are involved too.
Of course, the GDPR does introduce a one-stop shop. Arguably, therefore, these cases are of soon-to-be historic interest only. The GDPR proposes that the regulator in respect of the controller’s main EU establishment should have lead responsibility for regulation, with regulators in respect of other Member States being ‘concerned authorities’. There are two points to note: first, there is a system in place to facilitate the cooperation of the relevant supervisory authorities (Art 60), including possible recourse to a ‘consistency mechanism’ (Art 63 et seq); secondly, the competence of the lead authority to act in relation to cross-border processing in Article 56 operates without prejudice to the competence of each national supervisory authority in its own territory set out in Article 55. The first of these two points concerns the GDPR’s attempt to limit regulatory arbitrage and a downward spiral of standards, together with the broad approach to establishment. The interest of the recipient state in regulating means that there may be many cases involving ‘concerned authorities’. The precise implications of the second point are not clear; note, however, that the one-stop shop as regards Facebook would seemingly not stop data protection authorities taking enforcement action against users such as Wirtschaftsakademie.
In this guest post, Faith Gordon, University of Westminster explores how, under UK law, a child’s anonymity is not entirely guaranteed. Faith is speaking at the Information Law and Policy Centre’s annual conference – Children and Digital Rights: Regulating Freedoms and Safeguards – this Friday, 17 November.
Under the 1948 Universal Declaration of Human Rights, each individual is presumed innocent until proven guilty. A big part of protecting this principle is guaranteeing that public opinion is not biased against someone that is about to be tried in the courts. In this situation, minors are particularly vulnerable and need all the protection that can be legally offered. So when you read stories about cases involving children, it’s often accompanied with the line that the accused cannot be named for legal reasons.
However, a loophole exists: a minor can be named before being formally charged. And as we all know in this digital age, being named comes with consequences – details or images shared of the child are permanent. While the right to be forgotten is the strongest for children within the Data Protection Bill, children and young people know that when their images and posts are screenshot they have little or no control over how they are used and who has access to them.
Should a child or young person come into conflict with the law, Section 44 of the Youth Justice and Criminal Evidence Act 1999 could offer pre-charge protection for them as minors, but it has never been enacted.
The latest consideration of this issue was during debates in the House of Lords in July 2014 and October 2014. It was decided that the aims of Section 44 could be achieved by protections from media regulatory bodies. But given that, in reality, regulatory bodies and their codes of practice don’t adequately protect minors pre-charge, the government’s failure to enact this section of the law is arguably contrary to Article 8 of the European Convention on Human Rights, the right to respect for private and family life.
Once you’re named …
This failure is now exposing a 15-year-old child. Private details about him were published in print media, online and by other individuals on social media, after he was questioned by the Police Service of Northern Ireland in respect of the alleged TalkTalk hacking incident.
This alleged hacking has been described as one of the largest and most public contemporary incidents of cybercrime in the UK. And legal proceedings in the High Court were required to ensure that organisations, such as Google and Twitter, removed the child’s details from their platforms and to also restrain further publication of the child’s details. But despite injunctions being issued, internet searches are still revealing details about the identity of the 15-year-old.
The attempt to remedy the issue of this child’s identification online highlights the problem of dealing with online permanency. Once the horse has bolted it’s hard to get it back in.
This issue has arisen in a range of high profile cases where children and young people have been accused of involvement in crime. One example is the murder of Ann Maguire, a teacher in Leeds who was murdered in 2014.
When the incident was first reported, many of the newspapers published various details about the accused 15-year-old, including information about where he lived and his family upbringing. The Sun newspaper “outed” the 15-year-old by printing his name.
Allowing the media free rein to name a child before they are charged can later prejudice the fairness of their trial if it proceeds to court. This is what occurred in the case of Jon Venables and Robert Thompson, two ten-year-old boys who were convicted of the murder of a two-year-old. Their lawyers claimed that media reporting had undermined the chances of a fair trial and this had breached their rights. The European Court of Human Rights in its judgment in 1999 ruled that the boys did not receive a fair trial.
While the Northern Ireland judiciary states that there is protection through media regulatory guidelines, my research demonstrates that the revised IPSO Code of Practice – which came into force in January 2016 – fails to provide crucial advice to journalists on the use of social media and online content.
I have called for a clear set of enforceable guidelines for the media, stating that children’s and young people’s social media imagery and comments should not be reprinted or published without their fully informed consent, and that all decision making should reflect children’s best interests.
Consequences
Publishing details in this way is a form of naming and shaming, which can encourage or stir up anger, resentment and retaliation in communities. In today’s media-hungry world, the chase is to reveal as much as possible – but it is especially worrying when this naming is done before charge and exploits a loophole.
Children who are already vulnerable are placed at further risk. Research I have conducted over the past ten years clearly demonstrates the significance of negative media representations on children and young people, and their manifestation in punishment attacks, beatings and exiling from their communities.
As a youth advocate who works with young people said during an interview with me in 2015: “Really in the society we live in you are guilty until proven innocent … basically people are looking at them [young people] and going ‘criminal’ … it is not right.” Several youth workers I also interviewed stated that releasing details or imagery of children “could damage their health, well-being and future job prospects” and they discussed examples of how identification in the media “led to them getting shot or a beating” in communities.
A report by the Standing Committee for Youth Justice – an alliance of organisations aiming to improve the youth justice system in England and Wales – proposed that in the digital age a legal ban on publishing children’s details at any time during their contact with the legal system is the only safeguard.
It is clear that legislators, policymakers and the media regulatory bodies need to keep up with advances in online and social media practices to ensure that children’s rights are not being breached. Addressing this loophole in the legislation is one step that is urgently required because media regulatory bodies currently lack clarity and suitable ethical guidelines on this issue.
The gap within the criminal justice legislative framework needs to urgently be addressed. Unless it is, there could be further case examples of children who may not go on to be charged but have their details published, shared, disseminated and permanently accessible via a basic internet search.
In this guest post Dr Daniel R. Thomas, University of Cambridge reviews research surrounding ethical issues in research using datasets of illicit origin. This post first appeared on “Light Blue Touchpaper” weblog written by researchers in the Security Group at the University of Cambridge Computer Laboratory.
On Friday at IMC I presented our paper “Ethical issues in research using datasets of illicit origin” by Daniel R. Thomas, Sergio Pastrana, Alice Hutchings, Richard Clayton, and Alastair R. Beresford. We conducted this research after thinking about some of these issues in the context of our previous work on UDP reflection DDoS attacks.
Data of illicit origin is data obtained by illicit means, such as exploiting a vulnerability or through unauthorised disclosure; in our previous work this was leaked databases from booter services. We analysed existing guidance on ethics, and papers that used data of illicit origin, to see which issues researchers are encouraged to discuss and which issues they actually discussed. We found wide variation in current practice. We encourage researchers using data of illicit origin to include an ethics section in their paper, explaining why the work was ethical, so that the research community can learn from it. At present, in many cases the positive benefits as well as the potential harms of research remain entirely unidentified. Few papers record explicit Research Ethics Board (REB) (aka IRB/Ethics Committee) approval for the activity described, and the justifications given for exemption from REB approval suggest deficiencies in the REB process. It is also important to focus on the “human participants” of research rather than the narrower “human subjects” definition, as not all the humans who might be harmed by research are its direct subjects.
In this guest post, Claire Bessant, Northumbria University, Newcastle, looks into the phenomenon of ‘sharenting’. Her article is relevant to the Information Law and Policy Centre’s annual conference coming up in November – Children and Digital Rights: Regulating Freedoms and Safeguards.
A toddler with birthday cake smeared across his face grins delightedly at his mother. Minutes later, the image appears on Facebook. A not uncommon scenario – 42% of UK parents share photos of their children online, with half of these parents sharing photos at least once a month.
Welcome to the world of “sharenting” – where more than 80% of children are said to have an online presence by the age of two. This is a world where the average parent shares almost 1,500 images of their child online before their fifth birthday.
But while a recent report from OFCOM confirms many parents do share images of their children online, the report also indicates that more than half (56%) of parents don’t. Most of these non-sharenting parents (87%) actively choose not to do so to protect their children’s private lives.

Over sharing
Parents often have good reasons for sharenting. It allows them to find and share parenting advice, to obtain emotional and practical support, and to maintain contact with relatives and friends.
Increasingly, though, concerns are being raised about “oversharenting” – when parents share too much, or, share inappropriate information. Sharenting can result in the identification of a child’s home, childcare or play location or the disclosure of identifying information which could pose risks to the child.
While many sharenters say they are conscious of the potential impact of their actions, and consider their children’s views before sharenting, a recent House of Lords report on the matter suggests not all parents do. The “Growing up with the internet” report reveals that some parents share information they know will embarrass their children – and some never consider their children’s interests before they post.
A recent survey for CBBC Newsround also warns that a quarter of children who’ve had their photographs sharented have been embarrassed or worried by these actions.

Think of the kids
Police in France and Germany have taken concrete steps to address sharenting concerns. They have posted Facebook warnings, telling parents of the dangers of sharenting, and stressing the importance of protecting children’s private lives.
Back in the UK, some academics have suggested that the government should educate parents to ensure they understand the importance of protecting their child’s digital identity. But should the “nanny state” really be interfering in family life by telling parents how and when they can share their children’s information?
It’s clearly a tricky area to regulate, but it could be that the government’s recently published data protection bill may provide at least a partial answer.
In its 2017 manifesto, the Conservative party pledged to:
Give people new rights to ensure they are in control of their own data, including the ability to require major social media platforms to delete information.
In the recent Queen’s Speech, the government confirmed its commitment to reforming data protection law. And in August, it published a statement of intent providing more detail of its proposed reforms. In relation to the so-called “right to be forgotten” or “right to erasure”, the government states that:
Individuals will be able to ask for their personal data to be erased.
Users will also be able to ask social media platforms to delete information they posted during their childhood. In certain circumstances, social media companies will be required to delete any or all of a user’s posts. The statement explains:
For example, a post on social media made as a child would normally be deleted upon request, subject to very narrow exemptions.
The primary purpose of the data protection bill is to bring the new EU General Data Protection Regulation into UK law. This is to ensure UK law continues to accord with European data protection law post-Brexit – which is essential if UK companies are to continue to trade with their European counterparts.
It could also provide a solution for children whose parents like to sharent, because the new laws specify that an individual or organisation must obtain explicit consent or have some other legitimate basis to share an individual’s personal data. In real terms, this means that before a parent shares their child’s information online they should ask whether the child agrees.
Of course, this doesn’t mean parents are suddenly going to start asking for their children’s consent to sharent. But if a parent doesn’t obtain their child’s consent, or the child decides in the future that they are no longer happy for that sharented information to be online, the bill also provides another possible solution. Children could use the “right to erasure” to ask for social network providers and other websites to remove sharented information. Not perhaps a perfect answer, but for now it’s one way to put a stop to those embarrassing mugshots ending up in cyberspace for years to come.