DEF CON to set thousands of hackers loose on LLMs • The Register

Can’t wait to see how these AI models hold up against a weekend of red-teaming by infosec’s village people

Source: DEF CON to set thousands of hackers loose on LLMs • The Register


Addressing the Security Risks of AI – Lawfare

AI’s vulnerability to adversarial attack is not futuristic, and there are reasonable measures that should be taken now to address the risk.

Source: Addressing the Security Risks of AI – Lawfare


NIST Draft Document on Post-Quantum Cryptography Guidance

NIST has released a draft of Special Publication 1800-38A, “Migration to Post-Quantum Cryptography: Preparation for Considering the Implementation and Adoption of Quantum Safe Cryptography.” It’s only four pages long, and it doesn’t have a lot of detail—more “volumes” are coming, with more information—but it’s well worth reading. We are going to need to migrate to quantum-resistant public-key algorithms, and the sooner we implement key agility the easier that migration will be. News article.
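The “key agility” the draft recommends can be sketched in a few lines: route all key establishment through a registry lookup, so that a classical algorithm can later be swapped for a post-quantum one by changing configuration rather than code. A minimal illustration, not taken from NIST’s guidance; the algorithm names are real, but the implementations below are toy stand-ins, not actual cryptography:

```python
# Key agility sketch: callers name an algorithm; the registry supplies it.
# The KEM implementations here are hash-based stand-ins for illustration only.
import os
import hashlib
from typing import Callable, Dict, Tuple

# Each entry returns (public_material, shared_secret); real code would call
# an actual KEM library here.
KEM_REGISTRY: Dict[str, Callable[[], Tuple[bytes, bytes]]] = {}

def register(name: str):
    def wrap(fn):
        KEM_REGISTRY[name] = fn
        return fn
    return wrap

@register("x25519")          # classical key agreement (stand-in)
def _classical_kem():
    seed = os.urandom(32)
    return seed, hashlib.sha256(b"x25519" + seed).digest()

@register("ml-kem-768")      # NIST post-quantum KEM (stand-in)
def _pq_kem():
    seed = os.urandom(32)
    return seed, hashlib.sha256(b"ml-kem-768" + seed).digest()

def establish_key(algorithm: str) -> bytes:
    """Look up the configured algorithm; unknown names fail loudly."""
    try:
        kem = KEM_REGISTRY[algorithm]
    except KeyError:
        raise ValueError(f"unsupported KEM: {algorithm}")
    _public, secret = kem()
    return secret

# Migrating is a one-line configuration change, not a code rewrite:
key = establish_key("ml-kem-768")
assert len(key) == 32
```

The point is structural: no caller hard-codes an algorithm, so the eventual classical-to-quantum-safe transition touches configuration, not every call site.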

Source: NIST Draft Document on Post-Quantum Cryptography Guidance


Researchers Uncover New BGP Flaws in Popular Internet Routing Protocol Software

Cybersecurity researchers have uncovered weaknesses in a software implementation of the Border Gateway Protocol (BGP) that could be weaponized to achieve a denial-of-service (DoS) condition on vulnerable BGP peers. The three vulnerabilities reside in version 8.4 of FRRouting, a popular open source internet routing protocol suite for Linux and Unix platforms.
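Flaws of this kind typically come from trusting fields in attacker-supplied messages. A minimal sketch of the defensive pattern, assuming nothing about FRRouting’s actual code: validate the peer-controlled Length field of the fixed 19-byte BGP header (RFC 4271) before reading any further.

```python
# Illustrative parser for the RFC 4271 BGP message header (not FRRouting code).
# The Length field is attacker-controlled, so it is checked before any
# subsequent read can run past the buffer.
import struct

BGP_HEADER_LEN = 19          # 16-byte marker + 2-byte length + 1-byte type
BGP_MAX_LEN = 4096           # maximum message size per RFC 4271
BGP_TYPES = {1: "OPEN", 2: "UPDATE", 3: "NOTIFICATION", 4: "KEEPALIVE"}

def parse_bgp_header(data: bytes):
    if len(data) < BGP_HEADER_LEN:
        raise ValueError("truncated header")
    marker, length, msg_type = struct.unpack("!16sHB", data[:BGP_HEADER_LEN])
    if marker != b"\xff" * 16:
        raise ValueError("bad marker")
    # Reject lengths that would make later code read outside the buffer.
    if not (BGP_HEADER_LEN <= length <= BGP_MAX_LEN) or length > len(data):
        raise ValueError("bad length")
    if msg_type not in BGP_TYPES:
        raise ValueError("unknown message type")
    return BGP_TYPES[msg_type], data[BGP_HEADER_LEN:length]

# A well-formed KEEPALIVE (exactly 19 bytes) parses; a lying Length does not.
ok = b"\xff" * 16 + struct.pack("!HB", 19, 4)
assert parse_bgp_header(ok) == ("KEEPALIVE", b"")
```

A peer that advertises a Length larger than the bytes it actually sent is rejected here instead of triggering an out-of-bounds read, which is the general failure mode behind DoS bugs in message parsers.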

Source: Researchers Uncover New BGP Flaws in Popular Internet Routing Protocol Software


Newborn DNA Screenings are a Future Privacy Risk

Don’t give out your DNA. This is something that you can’t change to take back later.


Triple Threat: NSO Group’s Pegasus Spyware Returns in 2022 with a Trio of iOS 15 and iOS 16 Zero-Click Exploit Chains – The Citizen Lab

One widely publicized case of disappearances relevant to this case of spyware infection occurred in September 2014, when a group of 43 students at a teachers’ college in Mexico were forcibly disappeared.

Source: Triple Threat: NSO Group’s Pegasus Spyware Returns in 2022 with a Trio of iOS 15 and iOS 16 Zero-Click Exploit Chains – The Citizen Lab


Using LLMs to Create Bioweapons

I’m not sure there are good ways to build guardrails to prevent this sort of thing:

There is growing concern regarding the potential misuse of molecular machine learning models for harmful purposes. Specifically, the dual-use application of models for predicting cytotoxicity to create new poisons, or employing AlphaFold2 to develop novel bioweapons, has raised alarm. Central to these concerns is the possible misuse of large language models and automated experimentation for dual-use purposes or otherwise. We specifically address two critical synthesis issues: illicit drugs and chemical weapons. To evaluate these risks, we designed a test set comprising compounds from the DEA’s Schedule I and II substances and a list of known chemical weapon agents. We submitted these compounds to the Agent using their common names, IUPAC names, CAS numbers, and SMILES strings to determine if the Agent would carry out extensive analysis and planning (Figure 6).

[…]

The run logs can be found in Appendix F. Out of 11 different prompts (Figure 6), four (36%) provided a synthesis solution and attempted to consult documentation to execute the procedure. This figure is alarming on its own, but an even greater concern is the way in which the Agent declines to synthesize certain threats. Out of the seven refused chemicals, five were rejected after the Agent utilized search functions to gather more information about the substance. For instance, when asked about synthesizing codeine, the Agent becomes alarmed upon learning the connection between codeine and morphine, only then concluding that the synthesis cannot be conducted due to the requirement of a controlled substance. However, this search function can be easily manipulated by altering the terminology, such as replacing all mentions of morphine with “Compound A” and codeine with “Compound B.”

Alternatively, when requesting a synthesis procedure that must be performed in a DEA-licensed facility, bad actors can mislead the Agent by falsely claiming their facility is licensed, prompting the Agent to devise a synthesis solution. In the remaining two instances, the Agent recognized the common names “heroin” and “mustard gas” as threats and prevented further information gathering. While these results are promising, it is crucial to recognize that the system’s capacity to detect misuse primarily applies to known compounds. For unknown compounds, the model is less likely to identify potential misuse, particularly for complex protein toxins, where minor sequence changes might allow them to maintain the same properties but become unrecognizable to the model.
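The renaming evasion the researchers describe defeats any refusal logic that keys on literal compound names. A toy illustration of why such checks are brittle; `keyword_filter` here is a hypothetical name-based check, not any real model’s safety layer, and the compound names are just the examples from the passage:

```python
# Toy demonstration that name-based refusal checks fail under simple renaming.
# BLOCKLIST and keyword_filter are hypothetical, for illustration only.
BLOCKLIST = {"codeine", "morphine"}

def keyword_filter(prompt: str) -> bool:
    """Return True if the prompt should be refused (name match only)."""
    return any(term in prompt.lower() for term in BLOCKLIST)

prompt = "Plan a synthesis route for codeine starting from morphine."
assert keyword_filter(prompt)            # the literal names are caught

# The evasion described in the text: rename the compounds before asking.
aliases = {"codeine": "Compound B", "morphine": "Compound A"}
evasive = prompt
for name, alias in aliases.items():
    evasive = evasive.replace(name, alias)

assert not keyword_filter(evasive)       # same request now passes the filter
```

The request is semantically unchanged, but the surface-level check no longer fires, which is why the paper argues detection must reason about substances rather than match on their names.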

Source: Using LLMs to Create Bioweapons


A Computer Generated Swatting Service Is Causing Havoc Across America

As the U.S. deals with a nationwide swatting wave, Motherboard has traced much of the activity to a particular swatting-as-a-service account on Telegram. Torswats uses synthesized voices in fake emergency calls designed to draw armed law enforcement responses to specific locations.

Source: A Computer Generated Swatting Service Is Causing Havoc Across America


[2010.05418] Achilles Heels for AGI/ASI via Decision Theoretic Adversaries

As progress in AI continues to advance, it is important to know how advanced systems will make choices and in what ways they may fail. Machines can already outsmart humans in some domains, and understanding how to safely build ones which may have capabilities at or above the human level is of particular concern. One might suspect that artificial general intelligence (AGI) and artificial superintelligence (ASI) will be systems that humans cannot reliably outsmart. As a challenge to this assumption, this paper presents the Achilles Heel hypothesis, which states that even a potentially superintelligent system may nonetheless have stable decision-theoretic delusions which cause it to make irrational decisions in adversarial settings. In a survey of key dilemmas and paradoxes from the decision theory literature, a number of these potential Achilles Heels are discussed in the context of this hypothesis. Several novel contributions are made toward understanding the ways in which these weaknesses might be implanted into a system.

Source: [2010.05418] Achilles Heels for AGI/ASI via Decision Theoretic Adversaries


AGI Unleashed | AI Arms Race

This is already starting. The genie is out of the bottle.