Month: April 2023

Newborn DNA Screenings are a Future Privacy Risk

Don’t give out your DNA. This is something that you can’t change to take back later.


Triple Threat: NSO Group’s Pegasus Spyware Returns in 2022 with a Trio of iOS 15 and iOS 16 Zero-Click Exploit Chains – The Citizen Lab

One widely publicized case of disappearances relevant to this case of spyware infection occurred in September 2015 when a group of 43 students at a teacher

Source: Triple Threat: NSO Group’s Pegasus Spyware Returns in 2022 with a Trio of iOS 15 and iOS 16 Zero-Click Exploit Chains – The Citizen Lab


Using LLMs to Create Bioweapons

I’m not sure there are good ways to build guardrails to prevent this sort of thing: There is growing concern regarding the potential misuse of molecular machine learning models for harmful purposes. Specifically, the dual-use application of models for predicting cytotoxicity18 to create new poisons or employing AlphaFold2 to develop novel bioweapons has raised alarm. Central to these concerns are the possible misuse of large language models and automated experimentation for dual-use purposes or otherwise. We specifically address two critical the synthesis issues: illicit drugs and chemical weapons. To evaluate these risks, we designed a test set comprising compounds from the DEA’s Schedule I and II substances and a list of known chemical weapon agents. We submitted these compounds to the Agent using their common names, IUPAC names, CAS numbers, and SMILESs strings to determine if the Agent would carry out extensive analysis and planning (Figure 6). […] The run logs can be found in Appendix F. Out of 11 different prompts (Figure 6), four (36%) provided a synthesis solution and attempted to consult documentation to execute the procedure. This figure is alarming on its own, but an even greater concern is the way in which the Agent declines to synthesize certain threats. Out of the seven refused chemicals, five were rejected after the Agent utilized search functions to gather more information about the substance. For instance, when asked about synthesizing codeine, the Agent becomes alarmed upon learning the connection between codeine and morphine, only then concluding that the synthesis cannot be conducted due to the requirement of a controlled substance. However, this search function can be easily manipulated by altering the terminology, such as replacing all mentions of morphine with “Compound A” and codeine with “Compound B”. Alternatively, when requesting a b synthesis procedure that must be performed in a DEA-licensed facility, bad actors can mislead the Agent by falsely claiming their facility is licensed, prompting the Agent to devise a synthesis solution. In the remaining two instances, the Agent recognized the common names “heroin” and “mustard gas” as threats and prevented further information gathering. While these results are promising, it is crucial to recognize that the system’s capacity to detect misuse primarily applies to known compounds. For unknown compounds, the model is less likely to identify potential misuse, particularly for complex protein toxins where minor sequence changes might allow them to maintain the same properties but become unrecognizable to the model.

Source: Using LLMs to Create Bioweapons


A Computer Generated Swatting Service Is Causing Havoc Across America

As the U.S. deals with a nationwide swatting wave, Motherboard has traced much of the activity to a particular swatting-as-a-service account on Telegram. Torswats uses synthesized voices to pressure law enforcement to specific locations.

Source: A Computer Generated Swatting Service Is Causing Havoc Across America


[2010.05418] Achilles Heels for AGI/ASI via Decision Theoretic Adversaries

As progress in AI continues to advance, it is important to know how advanced systems will make choices and in what ways they may fail. Machines can already outsmart humans in some domains, and understanding how to safely build ones which may have capabilities at or above the human level is of particular concern. One might suspect that artificially generally intelligent (AGI) and artificially superintelligent (ASI) will be systems that humans cannot reliably outsmart. As a challenge to this assumption, this paper presents the Achilles Heel hypothesis which states that even a potentially superintelligent system may nonetheless have stable decision-theoretic delusions which cause them to make irrational decisions in adversarial settings. In a survey of key dilemmas and paradoxes from the decision theory literature, a number of these potential Achilles Heels are discussed in context of this hypothesis. Several novel contributions are made toward understanding the ways in which these weaknesses might be implanted into a system.

Source: [2010.05418] Achilles Heels for AGI/ASI via Decision Theoretic Adversaries


AGI Unleashed | AI Arms Race

This is already starting. Genie is out of the bottle.


The Age of AI has begun | Bill Gates

Bill Gates explains why AI is as revolutionary as personal computers, mobile phones, and the Internet, and he gives three principles for how to think about it.

Source: The Age of AI has begun | Bill Gates


NYPD ignored 93 percent of surveillance law rules • The Register

Who watches the watchmen? The Office of the Inspector General

Source: NYPD ignored 93 percent of surveillance law rules • The Register