Between safety and privacy: do the ends justify the drastic means?


The discussion “Protecting Children Online: Between Safety and Privacy”, organised by MEP Yana Toom (Centre Party/Renew Europe) in cooperation with the European Parliament Information Office in Estonia, was held on 2 June in Tallinn and streamed online on Facebook and YouTube.

While opening the discussion, MEP Yana Toom noted that in 2021 there was a 34% increase in reports of child sexual abuse online. “This is a growing problem and we need to fight it. We have a regulation in place now, valid until August 2024. It allows the processing of personal data and communications to fight child sexual abuse material (CSAM). The regulation makes it possible for social media and chat services to detect, report and remove sexual abuse material on a voluntary basis. But now the Commission has made a new proposal which requires that websites, social media and chat services proactively search for sexual abuse material and for grooming activities, in which adults approach children to convince them to make sexual images of themselves and share them online. Websites and chat services need to assess whether their services are at risk of being used to spread child sexual abuse material. They need to consider introducing age verification to limit children's access to high-risk services. National authorities would evaluate the risk assessment and could issue detection orders when the risk posed by a service is too high.”

The digital rights perspective was represented by Ella Yakubowska, Senior Policy Advisor at European Digital Rights (EDRi). “There is a legitimate reason for governments to take measures, and to require companies to take certain measures, to protect children”, she admitted. “But what we have on the table goes too far from both the human rights and the technology perspectives. Under this law, all of our communications on encrypted messaging platforms like WhatsApp, Signal or Threema would be considered at high risk of containing child sexual abuse material. This proposal would force private companies to scan all of the communications that we're sending on those platforms, to use AI-based technologies to analyze them, and to flag us to the police if they believe that we're doing anything suspicious.”

Yakubowska added: “If we say that in the EU it's OK to indiscriminately scan private messages, we're also giving carte blanche to China, Iran and other oppressive regimes to insert technologies into our communications that can easily be used in a wide variety of ways to track us, monitor what we say and do, and censor us. That is a step towards an authoritarian internet that it is important to push back against.”

Milan Zubicek, Head of Content Cluster, EU Institutions at Google, emphasized that it is important to allow companies to continue investing in detecting CSAM on a voluntary basis. “We understand that this is not enough and the authorities need to have some tools. We speak about detection orders, and we think there need to be additional safeguards on these orders. From our point of view it is important to clarify the scope: which services are in scope and which obligations apply to them. It should be efficient, targeted and proportionate.”

META's position was also represented: the corporation supports the initiative to fight child sexual abuse material. At the moment it uses metadata to take groups offline, meaning that instead of scanning messages it scans for suspicious behavior. META believes that it is possible to find CSAM in encrypted messages without reading the content of the messages or deploying problematic technologies such as client-side scanning. However, it requires a legal basis for this, and the new regulation does not provide such a basis.

Kerli Valner, communication coordinator of the “Smartly on the web” project at the Estonian Union for Child Welfare, pointed out that the scale of the CSAM problem is overwhelming: “Since 2011, we have received more than 7,000 reports, of which around 2,000 have concerned child sexual abuse images. We welcome this regulation as a way to fight online child sexual abuse more effectively and in a timely manner. There should be good cooperation between EU states as well as at the international level. As children are active users of different digital technologies and platforms, it is important to ensure their participation. At the same time, we need to give them the necessary support and protection and also respect their privacy. So we need to find the right balance.”

Maarja Kärson, advisor at the Department of Family Wellbeing and Safe Relationships at the Ministry of Social Affairs of Estonia, added that even in a country as small as Estonia there were 250 non-contact child sexual abuse cases last year, nearly 90% of which took place on the internet. She said that when the proposal was presented, the Estonian government was supportive of its aim. “We also raised several problems and issues that needed to be considered. We do support this proposal. However, we don't oppose the discussion concerning privacy rights. It's also important to understand that this is not a question of children's rights vs. privacy. Privacy is also a child's right. Children are digital citizens who use all these same channels. Their privacy also needs to be protected. When we think about the development of sexuality, this is a sensitive issue for young people, and we want this area of their lives to be untouched in the sense that there will be no intrusions.”

According to Henrik Trasberg, Legal Adviser on New Technologies and Digitalisation at the Ministry of Justice of Estonia, “if a platform uses end-to-end encryption, it cannot be forced to break it. That is the position of the Estonian government. The other part is what material the proposal regulates, because different materials carry different levels of risk to privacy. If we look through content to find known, existing child sexual abuse material, this is much more acceptable from a privacy perspective than trying to proactively detect unknown material. In one case we are looking for something specific; in the other, we are doing general filtering. And the regulation doesn't provide safeguards.”

Yana Toom raised a question many people ask: “If AI is following my messages, is this really an infringement of privacy?”

“AI will initially be the system that reviews the messages,” Ella Yakubowska said. “But under this proposal, anything that is not manifestly unfounded as child sexual abuse material must be sent by the platforms to the EU centre, which has to send it on to national police. So it's not just AI. Meanwhile in Ireland, of all the machine-generated reports that were sent to the national police in the last five years, at least 500 people were found to be completely innocent. It was exactly "children playing on a beach", or adults consensually sharing nude imagery with their partners. And yet these 500 people had investigations pursued against them. Their data was in a database connecting them to the crime of child sexual abuse, even though it had been proven that they were innocent.”

You can watch the 15-minute highlights from the discussion above or on YouTube (in English). The full version is also available here (also in English).