Month: February 2023

Banning TikTok

Congress is currently debating bills that would ban TikTok in the United States. We are here as technologists to tell you that this is a terrible idea and the side effects would be intolerable. Details matter. There are several ways Congress might ban TikTok, each with different efficacies and side effects. In the end, all the effective ones would destroy the free Internet as we know it.

There’s no doubt that TikTok and ByteDance, the company that owns it, are shady. They, like most large corporations in China, operate at the pleasure of the Chinese government. They collect extreme levels of information about users. But they’re not alone: Many apps you use do the same, including Facebook and Instagram, along with seemingly innocuous apps that have no need for the data. Your data is bought and sold by data brokers you’ve never heard of who have few scruples about where the data ends up. They have digital dossiers on most people in the United States.

If we want to address the real problem, we need to enact serious privacy laws, not security theater, to stop our data from being collected, analyzed, and sold—by anyone. Such laws would protect us in the long term, and not just from the app of the week. They would also prevent data breaches and ransomware attacks from spilling our data out into the digital underworld, including hacker message boards and chat servers, hostile state actors, and outside hacker groups. And, most importantly, they would be compatible with our bedrock values of free speech and commerce, which Congress’s current strategies are not.

At best, the TikTok ban considered by Congress would be ineffective; at worst, a ban would force us to either adopt China’s censorship technology or create our own equivalent.

The simplest approach, advocated by some in Congress, would be to ban the TikTok app from the Apple and Google app stores. This would immediately stop new updates for current users and prevent new users from signing up.
To be clear, this would not reach into phones and remove the app. Nor would it prevent Americans from installing TikTok on their phones; they would still be able to get it from sites outside of the United States. Android users have long been able to use alternative app repositories. Apple maintains tighter control over what apps are allowed on its phones, so users would have to “jailbreak”—or manually remove restrictions from—their devices to install TikTok.

Even if app access were no longer an option, TikTok would still be available more broadly. It is currently, and would still be, accessible from browsers, whether on a phone or a laptop. As long as the TikTok website is hosted on servers outside of the United States, the ban would not affect browser access.

Alternatively, Congress might take a financial approach and ban US companies from doing business with ByteDance. Then-President Donald Trump tried this in 2020, but it was blocked by the courts and rescinded by President Joe Biden a year later. This would shut off access to TikTok in app stores and also cut ByteDance off from the resources it needs to run TikTok. US cloud-computing and content-distribution networks would no longer distribute TikTok videos, collect user data, or run analytics. US advertisers—and this is critical—could no longer fork over dollars to ByteDance in the hopes of getting a few seconds of a user’s attention. TikTok, for all practical purposes, would cease to be a business in the United States.

But Americans would still be able to access TikTok through the loopholes discussed above. And they will: TikTok is one of the most popular apps ever made; about 70% of young people use it. There would be enormous demand for workarounds.

Source: Banning TikTok

Planning for AGI and beyond

Our mission is to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity. If AGI is successfully created, this technology could help us elevate humanity by increasing abundance, turbocharging the global economy, and aiding in the discovery of new scientific knowledge that […]

Source: Planning for AGI and beyond

The “AI Having Nuclear Launch Codes” Debate Has Begun

Nick has a great chat about AI and weapons. Remember how scared the government was of Kevin Mitnick, and the claim that he could trigger a nuclear launch by whistling into a phone, like Cap’n Crunch with the toy whistle from the cereal box? Yet somehow it’s OK for AI to possess such power.

Planting Undetectable Backdoors in Machine Learning Models : [Extended Abstract] | IEEE Conference Publication | IEEE Xplore

Given the computational cost and technical expertise required to train machine learning models, users may delegate the task of learning to a service provider. Delegation of learning has clear benefits, and at the same time raises serious concerns of trust. This work studies possible abuses of power by untrusted learners.

We show how a malicious learner can plant an undetectable backdoor into a classifier. On the surface, such a backdoored classifier behaves normally, but in reality the learner maintains a mechanism for changing the classification of any input, with only a slight perturbation. Importantly, without the appropriate “backdoor key,” the mechanism is hidden and cannot be detected by any computationally-bounded observer. We demonstrate two frameworks for planting undetectable backdoors, with incomparable guarantees.

• First, we show how to plant a backdoor in any model, using digital signature schemes. The construction guarantees that, given query access to the original model and the backdoored version, it is computationally infeasible to find even a single input where they differ. This property implies that the backdoored model has generalization error comparable with the original model. Moreover, even a distinguisher that can request backdoored inputs of its choice cannot backdoor a new input—a property we call non-replicability.

• Second, we demonstrate how to insert undetectable backdoors in models trained using the Random Fourier Features (RFF) learning paradigm (Rahimi, Recht; NeurIPS 2007). In this construction, undetectability holds against powerful white-box distinguishers: given a complete description of the network and the training data, no efficient distinguisher can guess whether the model is “clean” or contains a backdoor. The backdooring algorithm executes the RFF algorithm faithfully on the given training data, tampering only with its random coins. We prove this strong guarantee under the hardness of the Continuous Learning With Errors problem (Bruna, Regev, Song, Tang; STOC 2021). We show a similar white-box undetectable backdoor for random ReLU networks based on the hardness of Sparse PCA (Berthet, Rigollet; COLT 2013).

Our construction of undetectable backdoors also sheds light on the related issue of robustness to adversarial examples. In particular, by constructing an undetectable backdoor for an “adversarially-robust” learning algorithm, we can produce a classifier that is indistinguishable from a robust classifier, but where every input has an adversarial example! In this way, the existence of undetectable backdoors represents a significant theoretical roadblock to certifying adversarial robustness.
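The signature-based idea can be illustrated with a toy sketch. To be clear, this is not the paper’s actual construction: a keyed MAC stands in for the digital signature scheme, the “model” is a trivial parity rule, and every name is made up for illustration.

```python
import hashlib
import hmac

ATTACKER_KEY = b"attacker-secret"  # stands in for the signing key (hypothetical)
TAG_LEN = 8  # bytes of the tag embedded in the perturbation

def base_classifier(x: bytes) -> int:
    """Toy 'model': classify by the parity of the byte sum."""
    return sum(x) % 2

def tag(payload: bytes) -> bytes:
    """Truncated keyed MAC, standing in for a digital signature."""
    return hmac.new(ATTACKER_KEY, payload, hashlib.sha256).digest()[:TAG_LEN]

def backdoored_classifier(x: bytes) -> int:
    """Behaves exactly like base_classifier unless the input ends in a valid tag."""
    if len(x) > TAG_LEN and hmac.compare_digest(x[-TAG_LEN:], tag(x[:-TAG_LEN])):
        return 1 - base_classifier(x)  # attacker-chosen flip of the output
    return base_classifier(x)

def perturb(x: bytes) -> bytes:
    """Holder of the 'backdoor key' can slightly perturb any input to fire the backdoor."""
    return x + tag(x)
```

Without `ATTACKER_KEY`, producing an input that triggers the flip requires forging a MAC, which mirrors the paper’s point that the mechanism stays hidden from computationally-bounded observers; the real construction uses public-key signatures so that even a full description of the backdoored model does not reveal the signing key.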

Source: Planting Undetectable Backdoors in Machine Learning Models : [Extended Abstract] | IEEE Conference Publication | IEEE Xplore

US Supreme Court declines to hear NSA spying complaint • The Register

Warrantless data harvesting, you say? Feds have their secret reasons and we’re OK with that

Source: US Supreme Court declines to hear NSA spying complaint • The Register

Google will boost Android security through firmware hardening

Google has presented a plan to strengthen the firmware security on secondary Android SoCs (systems on a chip) by introducing mechanisms like control flow integrity, memory safety systems, and compiler-based sanitizers. […]

Source: Google will boost Android security through firmware hardening

Is Apple lying?

Don’t take their claims at face value. All big tech companies lie!


I’ve been using Safing’s PortMaster for a bit now. I support this developer. Here’s their latest blog post.



The Hidden Networks: Onion Routing, TOR, Lokinet, I2P, Freenet

Rob explains the above networks in detail.

The European Union’s Internet Surveillance Proposal

So we’re back to protecting the children. And I’m not making light of that at all. CSAM – child
sexual abuse material – and the online exploitation of children are so distasteful that they’re
difficult to talk about, because that requires imagining something you’d much rather not. But it’s
that power that gives this a bit of a Trojan horse ability to slip past our defenses. Because
there’s also a very valid worry that once we have agreed to compromise our privacy for the very
best of reasons, our government, a foreign government, or law enforcement might use their
then-available access to our no-longer-truly-private communications against us. Nowhere in the
EU’s pending surveillance legislation proposal is there any mention of terrorists or terrorism, but
it’s been voiced before and you can bet that it will come marching out again. And once
everyone’s communications are being screened for solicitous text that might be considered
“grooming”, photos that might be naughty, and any other content that some automated bot
thinks should be brought to a human’s attention, what’s next? This is the very definition of a
slippery slope.

Document 52022PC0209 is titled “Proposal for a REGULATION OF THE EUROPEAN PARLIAMENT
AND OF THE COUNCIL laying down rules to prevent and combat child sexual abuse.” First of all,
it won’t prevent it. Nothing will. What it will do is drive that material to seek other channels. And
that’s not a bad thing. And I agree that it would likely combat the problem. The question is, is
this the best solution and what real price are we paying to make that possible? And of course,
what could possibly go wrong?

So what is essentially happening is that the EU is taking the next step. Over, and ignoring, the
loud and recently polled objections of 72% of European citizens, EU legislators are preparing to
move their current content-screening Internet communications surveillance, which until now has
been voluntary, and as a consequence somewhat limited, to mandatory and therefore universal.

Okay. To recap how we got to where we are now…

Three years ago, in 2020, The European Commission proposed “temporary” legislation which
allowed for automated Internet communications surveillance for the purpose of screening
content for CSAM (child sexual abuse material).

The following summer, on July 6th 2021, the European Parliament adopted the legislation to
allow for this voluntary screening. And, as a result of this adoption, which they refer to as an
ePrivacy Derogation — in other words, creating a deliberate exception to ePrivacy for this
purpose — U.S.-based providers such as Gmail and Meta’s Facebook began
voluntarily screening for this content on some of their platforms. Notably, however, only those
very few providers have. The other providers of explicitly secure communications have not.

And so last summer, on May 11th, 2022, the Commission presented a proposal to move this
Internet surveillance from voluntary to mandatory for all service providers. As we noted when
this was last discussed in the context of Apple’s hastily abandoned proposal to provide
client-local image analysis by storing the hashes of known illegal images on the user’s phone,
the content to be examined includes not only images but also textual content which might be
considered solicitous of minors, known as “grooming.”
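As a rough sketch of what client-local hash matching amounts to (illustrative only: real deployments such as Apple’s abandoned proposal used perceptual hashes, which match visually similar images, whereas the cryptographic hash below only matches byte-identical files; all names and data here are made up):

```python
import hashlib

# Hypothetical on-device database of digests of known illegal images,
# populated here with a placeholder value for demonstration.
KNOWN_HASHES = {hashlib.sha256(b"known-image-bytes").hexdigest()}

def should_flag(image_bytes: bytes) -> bool:
    """Client-side check run against an image before it is sent or stored.

    A match would trigger the reporting pipeline described in the proposal.
    """
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_HASHES
```

The controversy is precisely that this check runs on the user’s own device, before encryption, against a database the user cannot inspect.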

And most controversially, all of this would impact every EU citizen regardless of whether there
was any preceding suspicion of wrongdoing. Everyone’s visual and textual communications
would be, and apparently will soon be, surveilled.

Interestingly, the legality of this surveillance in the EU has already been challenged and
according to a judgment by the European Court of Justice, the permanent and general automatic
analysis of private communications violates fundamental rights. Nevertheless, the EU now
intends to adopt such legislation. For the court to subsequently annul it could take years, by
which time the mandated systems will already be established and in place.

Currently, meetings and hearings are underway. A Parliamentary vote is being held next month
in March, followed by various actions being taken throughout the rest of the year as required to
assure the passage of this legislation through a large bureaucracy. After all, how does any
politician defend not wishing to protect the children? I’ve read a great deal of this proposal and
it has clearly been written to be rigorously defensible as a child protection act. Period. How do
you stand up and vote against that? It shows every indication of being adopted, with this
surveillance set to become mandatory in April of 2024.

“By introducing an obligation for providers to detect, report, block, and remove child sexual
abuse material from their services, the proposal enables improved detection, investigation and
prosecution of offenses under the Child Sexual Abuse Directive.”

“This proposal sets out targeted measures that are proportionate to the risk of misuse of a
given service for online child sexual abuse and are subject to robust conditions and
safeguards. [Oh! Well then, nothing to worry about.] It also seeks to ensure that providers
can meet their responsibilities, by establishing a European Centre to prevent and counter child
sexual abuse (‘the EU Centre’) to facilitate and support implementation of this Regulation and
thus help remove obstacles to the internal market, especially in connection to the obligations
of providers under this Regulation to detect online child sexual abuse, report it and remove
child sexual abuse material. In particular, the EU Centre will create, maintain and operate
databases of indicators of online child sexual abuse that providers will be required to use to
comply with the detection obligations.”

Why mandatory?

“The Impact Assessment shows that voluntary actions alone against online child sexual abuse
have proven insufficient, by virtue of their adoption by a small number of providers only, of the
considerable challenges encountered in the context of private-public cooperation in this field,
as well as of the difficulties faced by Member States in preventing the phenomenon and
guaranteeing an adequate level of assistance to victims. This situation has led to the adoption
of divergent sets of measures to fight online child sexual abuse in different Member States.
In the absence of Union action, legal fragmentation can be expected to develop further as
Member States introduce additional measures to address the problem at national level,
creating barriers to cross-border service provision on the Digital Single Market.”

Why is this a good thing to do?

“These measures would significantly reduce the violation of victims’ rights inherent in the
circulation of material depicting their abuse. These obligations, in particular the requirement to
detect new child sexual abuse materials and ‘grooming’, would result in the identification of
new victims and create a possibility for their rescue from ongoing abuse, leading to a
significant positive impact on their rights and society at large. The provision of a clear legal
basis for the mandatory detection and reporting of ‘grooming’ would also positively impact
these rights. Increased and more effective prevention efforts will also reduce the prevalence of
child sexual abuse, supporting the rights of children by preventing them from being victimised.
Measures to support victims in removing their images and videos would safeguard their rights
to protection of private and family life (privacy) and of personal data.”

So, this is clearly something that the EU is focused upon and is committed to seeing put into
action, to be in effect in the Spring of next year, 2024. And apparently, the EU has a legal system
much like the one which has evolved, or devolved, here in the US where the court system has
been layered with so many checks, balances and safeguards against misjudgments that years
will pass while challenges make their way through the courts.

Conspicuously missing from any of this proposed legislation is any apparent thought to how
exactly this will be accomplished. If I have an Android phone, whose job is it to watch and
analyze what images my camera captures, what images my phone receives, what textual
content I exchange? Is it the phone hardware provider’s job? Or is it the underlying Android OS?
Or is it the individual messaging application? It’s difficult to see how Signal and Telegram are
going to capitulate to this. And is it the possession of content or the transmission, reception and
communication of content? Can you record your own movies for local use?

The proposal establishes and funds the so-called “EU Centre” to serve as a central clearinghouse
for suspected illegal content. So, when an EU-based provider somehow detects something which
may be proscribed, the identity and current location of the suspected perpetrator, along with the
content in question, will be forwarded to the EU Centre for their analysis and action.
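The reporting flow described above is, structurally, a simple forwarding pipeline. A hypothetical sketch (every name and field here is an assumption for illustration, not taken from the proposal’s text):

```python
from dataclasses import dataclass

@dataclass
class SuspicionReport:
    """What the proposal obliges a provider to forward: the flagged content
    plus the suspected user's identity and current location."""
    provider: str
    user_identity: str
    user_location: str
    content: bytes

def forward_to_eu_centre(report: SuspicionReport, centre_queue: list) -> None:
    # Stand-in for transmission to the EU Centre clearinghouse,
    # which then analyzes the report and decides on further action.
    centre_queue.append(report)
```

The open questions in the text above (who detects, and at which layer) are exactly the parts this sketch leaves blank: something, somewhere on the device or service, has to construct that report.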
As I’ve been saying for years, this battle over the collision of cryptography and the state’s belief
in its need for surveillance is going to be a mess, and it’s far from over.

I have a link in the show notes to the full online legal proposal for anyone who’s interested in learning more. Wow.

Source: Security Now, episode 909