25 Oct 2025: AI security trilemma; AI security compared to autoimmune disorders; autonomous AI malware; can AI be funny; really simple licensing

Agentic AI’s OODA Loop Problem

Another seminal post from Bruce Schneier on the security of AI systems. An AI agent is a system that runs in a loop. He uses the Observe-Orient-Decide-Act framework (originally developed for training US air force pilots but applied widely since) and shows how at each stage untrusted input can manipulate or subvert the agent. The reason this is such a good post is that he then adds two more great concepts.

The "AI security trilemma" is a version of the well known CAP theorem from distributed systems (you can have any two of consistency, availability or partition (network split) tolerance), or the similar rule of thumb in project management (you can have any two of cheap, fast and high quality).

This is the agentic AI security trilemma. Fast, smart, secure; pick any two. Fast and smart—you can’t verify your inputs. Smart and secure—you check everything, slowly, because AI itself can’t be used for this. Secure and fast—you’re stuck with models with intentionally limited capabilities.

He then goes on to compare AI systems inability to distinguish malicious prompts from legitimate instructions to an organism's immune system going wrong with an autoimmune disorder. The organism can't distinguish self from non-self, "or like oncogenes, the normal function and the malignant behavior share identical machinery."

Bonus interesting security link: LOLMIL: Living Off the Land Models and Inference Libraries (via ImportAI). This is a proof of concept of autonomous AI agent malware that iteratively writes and executes code using LLMs on the target device to achieve its nefarious aims. The degree of local intelligence will make this kind of approach much harder to counter.

Why is this funny? And why AI doesn’t know — yet

(Paywalled article - this is the archive link)

Bob Mankoff was for a long time the cartoon editor for the New Yorker, and was running a hugely popular caption contest from 1988 (the cartoonists draw an image; the readers suggest funny captions). It turns out for more than a decade this dataset has been used to attempt to train a funny algorithm, and Mankoff is co-author on multiple computational humour studies as well as having taught undergraduate humour theory. His work with a team at the University of Wisconsin continues, attempting to predict which caption is funnier from a set (and doing well at that now), and authoring captions given images.

Example of a pairwise comparison caption evaluation 


Recognising funny captions is far easier than writing them. The Wisconsin team found that humans overwhelmingly preferred human-authored captions to AI-generated ones. It might just be a matter of time.

Pay-per-output? AI firms blindsided by beefed up robots.txt instructions

The right of AI companies to crawl and train on web content has been a vexacious question; all the major LLMs are trained on vast corpuses gathered with little explicit licensing or permission. RSL (really simple licensing) is an attempt to create a new open standard whereby web content owners can specify licensing terms. The organisation behind it, RSL Collective, has some heavyweight folks like Eckart Walther, one of the co-creators of the RSS standard while at Netscape in 1999, and is gaining broad buy-in from publishers and content hosting sites like Reddit and Medium. Will it work, what's to stop AI crawlers just ignoring it? If the big content delivery networks like Fastly and Cloudflare get behind it, it could work, as a meaningful proportion of the web sits behind their systems. This is one to watch, as the economics of web crawling for AI training or on-demand content (during a deep research query or thinking phase) could change rapidly.