So we’re back to protecting the children. And I’m not making light of that at all. CSAM – child
sexual abuse material – and the online exploitation of children are so distasteful that it’s difficult to
talk about them, because doing so requires imagining something you’d much rather not. But it’s that
emotional power that gives this topic a bit of a Trojan horse’s ability to slip past our defenses.
Because there’s also a very valid worry that once we have agreed to compromise our privacy for the
very best of reasons, our own government, a foreign government, or law enforcement might then use
their newly available access to our no-longer-truly-private communications against us. Nowhere in the
EU’s pending surveillance legislation proposal is there any mention of terrorists or terrorism, but
it’s been voiced before and you can bet that it will come marching out again. And once
everyone’s communications are being screened for seductive content that might be considered
“grooming”, photos that might be naughty, and any other content that some automated bot
thinks should be brought to a human’s attention, what’s next? This is the very definition of a
slippery slope.


Document 52022PC0209 is titled “Proposal for a REGULATION OF THE EUROPEAN PARLIAMENT
AND OF THE COUNCIL laying down rules to prevent and combat child sexual abuse.” First of all,
it won’t prevent it. Nothing will. What it will do is drive that material into other channels. And
that’s not a bad thing. And I agree that it would likely combat the problem. The questions are: is
this the best solution, and what real price are we paying to make it possible? And of course,
what could possibly go wrong?


So what is essentially happening is that the EU is taking the next step. Over the loud and
recently polled objections of 72% of European citizens, EU legislators are preparing to move
their current content-screening surveillance of Internet communications, which until now has
been voluntary, and as a consequence somewhat limited, to mandatory and therefore universal.

Okay. To recap how we got to where we are now…

Three years ago, in 2020, the European Commission proposed “temporary” legislation which
allowed for automated Internet communications surveillance for the purpose of screening
content for CSAM (child sexual abuse material).


The following summer, on July 6th, 2021, the European Parliament adopted the legislation to
allow for this voluntary screening. And, as a result of this adoption, which they refer to as an
ePrivacy Derogation — in other words, creating a deliberate exception to ePrivacy for this
purpose — U.S.-based providers such as Google’s Gmail, Microsoft’s Outlook.com and Meta’s
Facebook began voluntarily screening for this content on some of their platforms. Notably,
however, only those very few providers have done so. Providers of explicitly secure
communications have not.


And so last year, on May 11th, 2022, the Commission presented a proposal to move this
Internet surveillance from voluntary to mandatory for all service providers. As we noted when
this was last discussed in the context of Apple’s hastily abandoned proposal to provide
client-local image analysis by storing the hashes of known illegal images on the user’s phone,
the content to be examined includes not only images but also textual content which might be
considered solicitous of minors, known as “grooming.”
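To make “client-local image analysis” concrete, here’s a minimal sketch of the general
hash-matching idea, with everything in it hypothetical. Apple’s actual design used a perceptual
hash (NeuralHash) plus cryptographic blinding, not the naive exact matching shown here:

```python
# A toy sketch of on-device hash matching. Apple's abandoned proposal
# used a perceptual hash (NeuralHash) with cryptographic blinding so
# that match results weren't readable on-device; this plain SHA-256
# set lookup only illustrates the basic shape of the idea.

import hashlib
from pathlib import Path

# Hypothetical local database of known-image hashes (hex digests).
KNOWN_HASHES: set[str] = {"00" * 32}  # stand-in entry, not a real hash

def image_hash(path: Path) -> str:
    """Exact hash of the file's bytes. Unlike a perceptual hash, this
    misses any resized or recompressed copy of a known image."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def matches_known(path: Path) -> bool:
    """True if this image's hash appears in the local database."""
    return image_hash(path) in KNOWN_HASHES
```

That gap between exact and perceptual hashing is where much of the controversy lives:
perceptual hashes survive resizing and recompression, but researchers quickly demonstrated
that innocent images could be made to collide, which was one of the reasons Apple’s proposal
drew so much fire.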

And most controversially, all of this would impact every EU citizen regardless of whether there
was any preceding suspicion of wrongdoing. Everyone’s visual and textual communications
would be, and apparently will soon be, surveilled.


Interestingly, the legality of this surveillance in the EU has already been challenged and
according to a judgment by the European Court of Justice, the permanent and general automatic
analysis of private communications violates fundamental rights. Nevertheless, the EU now
intends to adopt such legislation. For the court to subsequently annul it can take years, by
which time the mandated systems will already be established and in place.


Currently, meetings and hearings are underway. A Parliamentary vote is being held next month
in March, followed by various actions throughout the rest of the year, as required to shepherd
this legislation through a large bureaucracy toward its seemingly sure passage. After all, how does any
politician defend not wishing to protect the children? I’ve read a great deal of this proposal and
it has clearly been written to be rigorously defensible as a child protection act. Period. How do
you stand up and vote against that? It shows every indication of being adopted, with this
surveillance set to become mandatory in April of 2024.


“By introducing an obligation for providers to detect, report, block, and remove child sexual
abuse material from their services, the proposal enables improved detection, investigation and
prosecution of offenses under the Child Sexual Abuse Directive.”

“This proposal sets out targeted measures that are proportionate to the risk of misuse of a
given service for online child sexual abuse and are subject to robust conditions and
safeguards. [Oh! Well then, nothing to worry about.] It also seeks to ensure that providers
can meet their responsibilities, by establishing a European Centre to prevent and counter child
sexual abuse (‘the EU Centre’) to facilitate and support implementation of this Regulation and
thus help remove obstacles to the internal market, especially in connection to the obligations
of providers under this Regulation to detect online child sexual abuse, report it and remove
child sexual abuse material. In particular, the EU Centre will create, maintain and operate
databases of indicators of online child sexual abuse that providers will be required to use to
comply with the detection obligations.”

Why mandatory?

“The Impact Assessment shows that voluntary actions alone against online child sexual abuse
have proven insufficient, by virtue of their adoption by a small number of providers only, of the
considerable challenges encountered in the context of private-public cooperation in this field,
as well as of the difficulties faced by Member States in preventing the phenomenon and
guaranteeing an adequate level of assistance to victims. This situation has led to the adoption
of divergent sets of measures to fight online child sexual abuse in different Member States.
In the absence of Union action, legal fragmentation can be expected to develop further as
Member States introduce additional measures to address the problem at national level,
creating barriers to cross-border service provision on the Digital Single Market.”

Why is this a good thing to do?

“These measures would significantly reduce the violation of victims’ rights inherent in the
circulation of material depicting their abuse. These obligations, in particular the requirement to
detect new child sexual abuse materials and ‘grooming’, would result in the identification of
new victims and create a possibility for their rescue from ongoing abuse, leading to a
significant positive impact on their rights and society at large. The provision of a clear legal
basis for the mandatory detection and reporting of ‘grooming’ would also positively impact
these rights. Increased and more effective prevention efforts will also reduce the prevalence of
child sexual abuse, supporting the rights of children by preventing them from being victimised.
Measures to support victims in removing their images and videos would safeguard their rights
to protection of private and family life (privacy) and of personal data.”


So, this is clearly something that the EU is focused upon and is committed to seeing put into
action, to be in effect in the spring of next year, 2024. And apparently, the EU has a legal system
much like the one which has evolved, or devolved, here in the U.S., where the court system has
been layered with so many checks, balances and safeguards against misjudgments that years
will pass while challenges make their way through the courts.


Conspicuously missing from any of this proposed legislation is any apparent thought to how
exactly this will be accomplished. If I have an Android phone, whose job is it to watch and
analyze what images my camera captures, what images my phone receives, what textual
content I exchange? Is it the phone hardware provider’s job? Or is it the underlying Android OS?
Or is it the individual messaging application? It’s difficult to see how Signal and Telegram are
going to capitulate to this. And is it the possession of content that’s targeted, or its
transmission and reception? Can you record your own movies for local use?


The proposal establishes and funds the so-called “EU Centre” to serve as a central clearinghouse
for suspected illegal content. So, when an EU-based provider somehow detects something which
may be proscribed, the identity and current location of the suspected perpetrator, along with the
content in question, will be forwarded to the EU Centre for their analysis and action.
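Since the proposal describes that reporting flow but, as far as I can tell, no technical format
for it, here’s a purely hypothetical sketch of the kind of payload such a report would have to
carry, with every field name invented for illustration:

```python
# Hypothetical only: the regulation obliges providers to report detected
# material to the EU Centre but defines no wire format; all of the field
# names below are invented for illustration.

from dataclasses import dataclass

@dataclass
class HypotheticalEUCentreReport:
    provider: str             # the reporting service provider
    flagged_content: bytes    # the image or text that triggered detection
    detection_method: str     # e.g. "hash-match" or "grooming-classifier"
    user_identity: str        # account identity data held by the provider
    last_known_location: str  # e.g. an IP-derived geolocation
```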

As I’ve been saying for years, this battle over the collision of cryptography and the state’s belief
in its need for surveillance is going to be a mess, and it’s far from over.


I have a link in the show notes to the full online legal proposal for anyone who’s interested in learning more. Wow.
https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM%3A2022%3A209%3AFIN&qid=1652

Security Now ep. 909: https://www.grc.com/sn/sn-909-notes.pdf