Hooked on Mnemonics Worked for Me
Codex’s Model Interaction & Inter-process Communication
Agentic AI Security: Reviewing the Past to Predict the Future
OpenAI recently posted a role for a Cybersecurity Landscape Analyst within their Intelligence and Investigation team. One line stood out:
“Develop forward-looking assessments of how cyber threats may evolve over 6–24 months.”
To predict the future of Agentic AI, we only need to look to the past. Agentic AI security is not emerging from nothing. It is replaying the same history as traditional computing security, but within a compressed timeline.
As of this writing, prompt injection is a commonly discussed attack vector against LLM-based systems. At its core, prompt injection exists because LLMs are sequence predictors with no native separation between trusted control instructions (system prompts) and untrusted input (user data). This is not a new problem. This is basically Intel x86 in Real Mode.
In Real Mode, code, data, the stack, and even the interrupt vector table all share the same memory space. There is no privilege separation. Any instruction can jump anywhere, overwrite anything, and execute without restriction. The fundamental issue is identical: no boundary between control and data. Detection strategies in that era relied on pattern matching, heuristics, checksums, and runtime hooking. Modern defenses against prompt injection, such as guardrails, input filtering, and heuristic detection, are not that different. They are variations of the same reactive strategies used before architectural fixes existed.
What about forward-looking cyber threats like the first Agentic-AI worm? For this example, we could consider the Morris Worm in 1988. Its success was not due to a single vulnerability, but an environment characterized by high trust between systems, widespread exposure of network services, weak authentication mechanisms, and a highly connected user base.
Now map this to Agentic AI. Instead of network services like sendmail, finger, or rsh, we have tool-enabled agents such as OpenClaw. Instead of academic researchers, we have early adopters rapidly integrating these systems into real workflows. Instead of BSD Unix systems in academic environments, we have Mac Minis showing up in homes and offices because people want to run OpenClaw locally. Instead of executable payloads, we have prompts. The conditions for a worm are the same: trust, connectivity, and execution capability. What is currently missing is density. There are not yet enough interconnected, tool-enabled systems for large-scale, worm-like propagation comparable to the Morris Worm or Slammer.
My theory is that the same threats, along with the security mitigations developed to address them since the 60s and 70s, will replay themselves within the microcosm of Agentic AI. We are currently in DOS Mode for Agentic AI.
Update: A colleague shared the following link
https://arxiv.org/abs/2403.02817
LLMs != Security Products
Cybersecurity stocks took a dive after Anthropic released a blog post titled “Making frontier cybersecurity capabilities available to defenders". What stood out was not the post itself, but the market reaction. Companies tied to endpoint protection, cloud security, and other traditional cybersecurity products were affected, even though the post had little direct relevance to those companies.
That reaction highlights a disconnect between the perceived capabilities of “AI” and its actual impact on cybersecurity products, a disconnect that likely extends beyond the market. To make sense of that gap, it helps to start with what is actually meant by ‘AI’ in this context. Usage of the term AI (short for Artificial Intelligence) has increased sharply since the release of ChatGPT in November of 2022. In practice, much of what is labeled “AI” today is better described as large language models (LLMs). For readers unfamiliar with LLMs, a common definition is:
“A large language model (LLM) is a type of artificial intelligence that can understand and create human language. These models learn by studying huge amounts of text from books, websites, and other sources.”
What makes LLMs fascinating and applicable to our modern life is how they solved (on a surface level) a field of AI called Natural Language Processing (NLP). For readers not familiar with NLP, autocomplete, email spam filters and auto-correct are all examples of NLP. Here is a definition of NLP.
“A field in Artificial Intelligence, and also related to linguistics, focused on enabling computers to understand and generate human language.”
Long-time readers of this blog may recall that I previously used a sub-field of NLP, Natural Language Generation (NLG) to automatically create descriptions of disassembled functions via API calls. On their own, LLMs require text for both training and inference. They are not autonomous systems; without prompts, they do not function. This distinction is important when discussing AI and cybersecurity, because evaluating or classifying security events requires context that does not exist as text as input to a prompt. That context has to be generated by additional software.
Generating the context requires an understanding and access to the complete lifecycle of the security event that is being used for the context. Walking through this lifecycle matters because it highlights how much logic exists before an event ever becomes text.
A classic example of a security event is a process initiating an outbound network connection directly to an IP address. How that event is handled varies widely depending on the type of security product and where it operates in the OSI model. For this example, assume the product operates at Layer 7, the application layer. The event pipeline in this case includes several distinct steps. A kernel-mode driver or user-mode component monitors process creation and relevant networking APIs. The destination IP address is evaluated to ensure it is not local, then serialized into text and logged. That log data is subsequently forwarded to a file-based or cloud-based centralized logging system. Even this simplified path omits important actions such as blocking the connection or terminating the process. Writing code is not the same as building a security product, and LLMs do not possess the authority or signal access required to determine whether an IP address is benign or malicious. An LLM can describe an alert very well; it cannot, on its own, determine whether that alert represents malicious behavior without pre-existing detection logic, telemetry, or intelligence-derived indicators of compromise.
In practice, an agent is an LLM placed inside a loop, where it can inspect the current state of a system, run tools or commands, observe the results, and decide what to do next until it reaches some stopping point. Without the output of those tools and commands, the LLM provides no value; it has nothing to reason over. The surrounding software is what produces the text that gives the model context.
As of this publication date, LLMs are not going to replace cybersecurity products. These systems are large, long-lived codebases, and their value is not defined by code generation alone. What matters is the telemetry collected and the logic built on top of that telemetry to determine whether the text describing an event represents something benign or something malicious. Large language models can help explain security events, but they don’t replace the systems that detect them, and confusing the two is how markets end up reacting to the wrong things.
msdocsviewer
Hello,
I forgot to post a recent IDAPython plugin that I created for viewing Microsoft SDK documentation in IDA. Here is an example screenshot of msdocsviewer .
The repository for the plugin can be found here.
Function Trapper Keeper - An IDA Plugin For Function Notes
Function Trapper Keeper is an IDA plugin for writing and storing function notes in IDBs, it’s a middle ground between function comments and IDA’s Notepad. It’s a tool that I have wanted for a while. To understand why I wanted Function Trapper Keeper, it might be worth describing my process of reverse engineering a binary in IDA.
Upon opening a binary, I always take note of the code to data ratio. This is can be inferred by looking at the Navigator band in IDA. If there is more data than code in the binary, it can hint that the binary is packed or encrypted. If so, I typically stop the triage of the binary to start searching for cross-references to the data. In many instances the cross-references can lead to code used for decompressing or decrypting the data. For example, if the binary is a loader it would contain the second stage payload encrypted or some other form of obfuscation. By cross-referencing the data and finding the decryption routine of the loader, I can quickly pivot to extracting the payload. Another notable ratio is if the data or code is not consistent. If the code changes from data to code and back, it is likely that the analysis process of IDA found inconsistencies in the disassembled functions. This could be from anti-disassemblers, flawed memory dumps or something else that needs attention. After the ratios, I look at the strings. I look for the presence of compilers strings, strings related to DLLs and APIs, user defined strings or the lack of user defined strings. If the latter, I’ll start searching for the presence of encrypted strings and then cross-referencing their usage. This can help find the function responsible for string decryption. If I can’t find the string decryption routine, I’ll use some automation to find all references to XOR instructions. After reviewing strings, I’ll do a quick triage of imported function. I like to look for sets of APIs that I know are related to certain functionality. For example, if I see calls to VirtualAlloc, VirtualProtect and CreateRemoteThread, I can infer that process injection is potentially present in the binary.
After the previously described triage, I have high-level overview of the binary and usually know if I should do a deep dive of the binary or if I need to focus on certain functionality (encrypted strings, unpacked, etc). If I’m doing a deep dive I like to label all functions. For my IDBs, the name of the function hints at my level of understanding of the function. The more descriptive the function name, the more I know about it. If I know the function does process injection into explorer.exe I might name it “inject_VirtRemoteThreadExplorer”. If I don’t care about the function but I need to note it’s related to strings and memory allocation I might label it “str_mem”. If I’m super lazy I might name the function “str_mem_??”, and yes you can use “?” in IDA’s function names. This is a reminder that I should probably double check the function if it’s used a lot. Once I have all the functions labeled, I can be confident of the general functionality of the binary. This is when I start digging deeper into the functions.
This can vary but with lots of malware families a handful of the functions contain the majority of the notable functionality. This is commonly where I spend the most of my time reversing. I have said it before in a previous post, that if you aren’t writing then you aren’t reversing. Since I spend lots of time in these functions, I like to have my notes close by. Notes can be added as Function comments but the text disappears once you scroll down the function, plus the text can’t be formatted or the function comments can’t be easily exported and IDA’s Notepad suffers from the same issues (minus the export). Having all the function notes in a single pane and being able to export than to markdown is super helpful. My favorite feature of the plugin is when I scroll from function to function the text refreshes for each function. The plugin can be seen in the right of the following image.
Having a description accessible minimizes the amount of time I have to read code I already reversed, which is useful when opening up old IDBs. I hope others find it as useful as I do.
Here is a link to the repo.
For more information on the Navigation band in IDA check out Igor’s post.
Please leave a comment, ping me on twitter or mastodon or create an issue on GitHub.
Recommended Resources for Learning Cryptography: RE Edition
A common question when first reverse engineering ransomware is “what is a good resource for learning cryptography?”. Having an understanding of cryptography is essential when reversing ransomware. Most reverse engineers need to know how to identify the encryption algorithm, be able to follow the key generation, understand key storage and ensure the encryption implementation isn’t flawed. To accomplish these items it is essential to have a good foundational knowledge of cryptography. The following are some recommendations that I have found beneficial on my path to learning cryptography.
One of the most important skills is having an understanding of how common encryption algorithms work. The best introductory book on cryptography is Understanding Cryptography: A Textbook for Students and Practitioners. It was written in a way that “teaches modern applied cryptography to readers with a technical background but without an education in pure mathematics” (source). The book also covers all modern crypto schemes commonly used. One of the best parts about the book is each chapter has a lecture on YouTube taught by the authors. This format is useful because it reinforces the concepts or adds more details to some of the more difficult topics.
gopep (Go Lang Portable Executable Parser)
gopep (Go Lang Portable Executable Parser) is project I have been working on for learning about Windows Portable Executables (PE) compiled in Go. As most malware analyst have noticed, there has been an uptick in malware (particularly ransomware) compiled in Go. At first glance, reverse engineering Go PE files can be intimidating. The files are commonly over 3MB in size, contains thousands of functions and have a unique calling convention that can return multiple arguments. The first time I opened up an executable in IDA, I was lucky because the plugin IDAGolangHelper was able to identify everything. The second time, I wasn't so lucky. This motivated me to port IDAGolangHelper to IDA 7.5, Python 3, convert the GUI to PyQT and include some code that parsed the Go source code and added the Go function comments to the IDB. After everything was done, my code didn't fix up the IDB. This lead me writing gopep. In IDAGolangHelper defense, the issue was because the hard-coded bytes used to identify Go version had not been updated for a couple of years. I should have checked this first or checked one of the multiple pull requests.
gopep is a Python script that can parse Go compiled PE file without using Go. The script only relies on Pefile. There are similar scripts that are excellent for ELF executables but during my analysis I noticed they threw exceptions when parsing PE files. Below we can see the command line options that gopep currently supports, it can also be used as a class.
C:\Users\null\Documents\repo\gopep>python gopep.py -h
usage: gopep.py [-h] [-c C_DIR] [-e E_FILE] [-x EA_DIR] [-v IN_FILE] [-m MD_FILE] [-t T_FILE] [-ev ET_FILE]
gopep Go Portable Executable Parser
optional arguments:
-h, --help show this help message and exit
-c C_DIR, --cluster C_DIR
cluster directory of files
-e E_FILE, --export E_FILE
export results of file to JSON
-x EA_DIR, --export_all EA_DIR
export results of directory to JSONs
-v IN_FILE, --version IN_FILE
print version
-m MD_FILE, --module-data MD_FILE
print module data details
-t T_FILE, --triage T_FILE
triage file, print interesting attributes
-ev ET_FILE, --everything ET_FILE
print EVERYTHING!
gopep is primarily for exploring structures within PE files compiled in Go but it also supports clustering. The clustering algorithm is similar to import hashing but uses a sets of symbol names and file paths that are unique to executables compiled in Go. As with most executable clustering algorithms, it can be broken by compressing the executable. The clustering can be done by passing a command of -c and a directory of files that should be clustered. I would not recommend clustering to many files using my code. You'd be better off exporting the hashes using the command -x , parsing the JSONs and then querying that way.
The README for the project has more details on the fields parsed, my notes and a great set of references for anyone wanting to read up on what happens when Go compiles an executable.
https://github.com/alexander-hanel/gopep


