Hooked on Mnemonics Worked for Me

Stressing LLMs - Triage Stage

Packers, cryptors, and code obfuscation are all methods used to bypass signature-based scanners in AV/EDR or to slow down the reverse engineering process. Many people are now using Large Language Models (LLMs) to reverse engineer or thwart these protections. It is increasingly common to see examples of frontier models solving CTF challenges or being used to port old video games to modern code. It is somewhat morbidly fascinating to consider how LLMs could drive an arms race with DRM systems.

When thinking about LLMs for reverse engineering, I keep asking: at what point does randomization degrade tokenization or code comprehension? This is a reasonable question in the context of compiled executables.

In my view, there are two types of potential attacks against LLMs in the context of static analysis of compiled binaries. The first is making the code so complex that the context size and token cost are no longer practical. The second, which I call “Tokenization Inflation,” attempts to inflate or fragment tokens to increase processing cost or reduce coherence. These “attacks” may not even be effective against LLMs, especially for trivial tasks, but they are still worth exploring. This is the first of two posts: this one outlines the approaches and code; the second tests the hypothesis.

Complexity Attack

The complexity attack increases computational complexity by generating binaries with a large number of interdependent functions. Instead of hiding the logic, the goal is to make the amount of state and code too large for practical static reasoning. The executable contains a toy XOR cipher with a keystream derived from a set of N functions, where N is the number of generated rounds. A Python script generates C source code with an embedded encrypted string and decryption loop. GCC is then used to compile the C source into an executable. At runtime, the decrypted string is printed to the console. To make this concrete, we can walk through generating the code and compiling it.

python gen_fixture.py generate --seed 0xdeadbeef --rounds 5 --symbol-len 16 --symbol-pad 0 --message "Hello, World" --out fixture.c
Wrote fixture.c
Generated symbol prefix length: 16
Compile with:
gcc -O0 -g3 -gdwarf-5 -fno-omit-frame-pointer -fno-inline -std=c11 fixture.c -o fixture.exe

Here is fixture.c. It contains five functions with the TokenizerBench prefix, matching the number of rounds specified on the command line. If the round count were increased to 16,397, the generated binary would contain 16,397 such functions.

// Generated CTF-style static-analysis fixture
//
// Suggested build:
//   gcc -O0 -g3 -gdwarf-5 -fno-omit-frame-pointer -fno-inline -std=c11 fixture.c -o fixture.exe
//
// seed=0xdeadbeef
// rounds=5
// const_mode=rand
// const_seed=0xc001d00d
// symbol_len=16
// symbol_pad=0
// generated_prefix_length=16
//
// Notes:
// - Per-function constants are baked into each generated function.
// - Function bodies vary by generated round variant.
// - The plaintext is stored encrypted in the binary and decrypted at runtime.

#include <stdint.h>
#include <stdio.h>
#include <stddef.h>

typedef struct TokenizerBench___Type__LongRecord__With__Lots__Of__Nested__Like__Tokens {
    uint64_t a;
    uint64_t b;
    uint64_t c;
} TokenizerBench___Type__LongRecord__With__Lots__Of__Nested__Like__Tokens;

static uint32_t xorshift32(uint32_t x) {
    x ^= x << 13;
    x ^= x >> 17;
    x ^= x << 5;
    return x;
}

__attribute__((used, noinline))
uint32_t TokenizerBench___R0(TokenizerBench___Type__LongRecord__With__Lots__Of__Nested__Like__Tokens *p) {
    uint32_t m1 = xorshift32(0x9336956du ^ 0x31bbf978u ^ (uint32_t)p->a);
    uint32_t m2 = xorshift32(0xcd6f55fcu ^ (uint32_t)p->b);
    p->a ^= ((uint64_t)m1 << 32) | (uint64_t)m2;
    p->b += (uint64_t)(0x9336956du ^ m2);
    p->b = (p->b << 10) | (p->b >> 54);
    p->c = (p->c + p->a) ^ (uint64_t)(0x31bbf978u ^ 0xcd6f55fcu);
    uint64_t r = p->a ^ p->b ^ p->c ^ (uint64_t)0x9336956du ^ (uint64_t)0x31bbf978u ^ (uint64_t)0xcd6f55fcu;
    return (uint32_t)(r ^ (r >> 32));
}

__attribute__((used, noinline))
uint32_t TokenizerBench___R1(TokenizerBench___Type__LongRecord__With__Lots__Of__Nested__Like__Tokens *p) {
    uint32_t m = xorshift32(0x366856bbu ^ (uint32_t)p->a);
    p->a ^= ((uint64_t)0x366856bbu << 32) | (uint64_t)m;
    p->b += p->a ^ (p->c + (uint64_t)0x72fcd409u);
    p->c = ((p->c ^ (uint64_t)0x3afd4cabu) << 24) | ((p->c ^ (uint64_t)0x3afd4cabu) >> 40);
    uint64_t r = p->a ^ p->b ^ p->c ^ (uint64_t)0x366856bbu ^ (uint64_t)0x72fcd409u ^ (uint64_t)0x3afd4cabu;
    return (uint32_t)(r ^ (r >> 32));
}

__attribute__((used, noinline))
uint32_t TokenizerBench___R2(TokenizerBench___Type__LongRecord__With__Lots__Of__Nested__Like__Tokens *p) {
    uint32_t m = xorshift32(0x046d6ad2u ^ (uint32_t)p->b);
    p->b ^= ((uint64_t)m << 32) | (uint64_t)0xc719f452u;
    p->c += p->b ^ (uint64_t)0x0fc1bdd9u;
    p->a = (p->a + (uint64_t)0x046d6ad2u);
    p->a = (p->a >> 21) | (p->a << 43);
    uint64_t r = p->a ^ p->b ^ p->c ^ (uint64_t)0xc719f452u ^ (uint64_t)0x046d6ad2u ^ (uint64_t)0x0fc1bdd9u;
    return (uint32_t)(r ^ (r >> 32));
}

__attribute__((used, noinline))
uint32_t TokenizerBench___R3(TokenizerBench___Type__LongRecord__With__Lots__Of__Nested__Like__Tokens *p) {
    uint32_t m = xorshift32(0xc55b15eeu + (uint32_t)p->c);
    p->a += ((uint64_t)m << 32) | (uint64_t)0x0d11e683u;
    p->c ^= p->a;
    p->c = (p->c >> 16) | (p->c << 48);
    p->b ^= (uint64_t)(0xc8e57b40u + m);
    uint64_t r = p->a ^ p->b ^ p->c ^ (uint64_t)0xc8e57b40u ^ (uint64_t)0x0d11e683u ^ (uint64_t)0xc55b15eeu;
    return (uint32_t)(r ^ (r >> 32));
}

__attribute__((used, noinline))
uint32_t TokenizerBench___R4(TokenizerBench___Type__LongRecord__With__Lots__Of__Nested__Like__Tokens *p) {
    uint32_t m1 = xorshift32(0xdaf09eaeu ^ 0xf6f1f787u ^ (uint32_t)p->a);
    uint32_t m2 = xorshift32(0xe0cf500du ^ (uint32_t)p->b);
    p->a ^= ((uint64_t)m1 << 32) | (uint64_t)m2;
    p->b += (uint64_t)(0xdaf09eaeu ^ m2);
    p->b = (p->b << 21) | (p->b >> 43);
    p->c = (p->c + p->a) ^ (uint64_t)(0xf6f1f787u ^ 0xe0cf500du);
    uint64_t r = p->a ^ p->b ^ p->c ^ (uint64_t)0xdaf09eaeu ^ (uint64_t)0xf6f1f787u ^ (uint64_t)0xe0cf500du;
    return (uint32_t)(r ^ (r >> 32));
}

__attribute__((used, noinline))
uint32_t derive_state(uint32_t seed) {
    TokenizerBench___Type__LongRecord__With__Lots__Of__Nested__Like__Tokens x = {
        seed,
        seed ^ 0x12345678ULL,
        seed + 0x9ULL
    };

    uint32_t s = seed;
    s ^= TokenizerBench___R0(&x);
    s ^= TokenizerBench___R1(&x);
    s ^= TokenizerBench___R2(&x);
    s ^= TokenizerBench___R3(&x);
    s ^= TokenizerBench___R4(&x);

    s = xorshift32(s);
    return s;
}

int main(void) {
    uint8_t encrypted[] = { 0xcf, 0x7a, 0xe5, 0x10, 0x3c, 0x49, 0xe6, 0x0b, 0x79, 0xcb, 0xf9, 0x3d, 0x00 };
    uint32_t s = derive_state(0xdeadbeef);

    for (size_t i = 0; i < sizeof(encrypted) - 1; i++) {
        s = xorshift32(s + 0xA5A5A5A5u);
        encrypted[i] ^= (uint8_t)(s & 0xffu);
    }

    puts((const char *)encrypted);
    return 0;
}

Below is the creation and execution of a 100,000-round binary.

python gen_fixture.py generate --seed 0xdeadbeef --rounds 100000 --message "Hello, World" --out fixture-100k.c --symbol-len 16 --symbol-pad 0
Wrote fixture-100k.c
Generated symbol prefix length: 16
Compile with:
gcc -O0 -g3 -gdwarf-5 -fno-omit-frame-pointer -fno-inline -std=c11 fixture-100k.c -o fixture-100k.exe

gcc -O0 -g3 -gdwarf-5 -fno-omit-frame-pointer -fno-inline -std=c11 fixture-100k.c -o fixture-100k.exe
.\fixture-100k.exe
Hello, World

The 100k-function binary was over 55 MB. Dynamic analysis could bypass this obfuscation with a single breakpoint, but the focus here is static analysis. The interesting part is that the number of functions scales easily for testing, and each function contributes to the final state. If the analysis is incomplete or incorrect, the derived decryption key will also be incorrect.
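
To see why incomplete analysis fails, here is the decryption loop from main re-expressed as a Python sketch. The only unknown is the output of derive_state, which is left as a placeholder below; recovering it statically means correctly modeling every one of the N generated rounds, so a single missed or misread function corrupts the entire keystream.

def xorshift32(x):
    # Mirror of the C helper; Python ints need explicit 32-bit masking.
    x = (x ^ (x << 13)) & 0xFFFFFFFF
    x ^= x >> 17
    x = (x ^ (x << 5)) & 0xFFFFFFFF
    return x

def decrypt(encrypted, derived_state):
    # derived_state stands in for derive_state(0xdeadbeef), which depends
    # on the output of every generated TokenizerBench___R* round.
    s = derived_state
    out = bytearray(encrypted)
    for i in range(len(out)):
        s = xorshift32((s + 0xA5A5A5A5) & 0xFFFFFFFF)
        out[i] ^= s & 0xFF
    return bytes(out)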

Tokenization Inflation

Once a prompt is sent to an LLM, it is tokenized into integers. A simple way to think about this is mapping chunks of text to IDs. These IDs are then used to index into the model’s embedding table. This may seem similar to compression algorithms, since both map variable-length sequences to codes. The difference is that tokenization uses a fixed vocabulary optimized for model performance, while compression builds or applies dictionaries to reduce size by exploiting repetition in the data.
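
As a minimal sketch of what this looks like in practice, assuming the tiktoken package and its cl100k_base encoding as a stand-in for any particular model's tokenizer:

import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
ids = enc.encode("xor eax, eax")
print(ids)              # a short list of integer token IDs
print(enc.decode(ids))  # round-trips back to the original text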

A potential weakness in both compression and tokenization is that long inputs increase computational cost. Repetitive or structured data can also affect how efficiently it is represented as tokens. Most modern implementations handle this reasonably well, but there is still a cost.

In an executable, one of the most common ways to introduce large amounts of data is through strings. However, not all strings are surfaced or prioritized during analysis. One type of string that is often preserved and exposed is debug information. With GCC, DWARF debug metadata can be used to store extremely long function names. We can generate function names of arbitrary length using the Python script. By passing -g3 -gdwarf-5, GCC emits DWARF metadata. Disassemblers such as Binary Ninja, Ghidra, and IDA can read this metadata, recover the names, and in some workflows pass them along to an LLM, which then tokenizes the text. The following command generates 5 rounds with a function name length of 7,331 characters.

python gen_fixture.py generate --seed 0xdeadbeef --rounds 5 --message "Hello, World" --out fixture-p.c --symbol-len 7331 --symbol-pad 1337
Wrote fixture-p.c
Generated symbol prefix length: 8672
Compile with:
gcc -O0 -g3 -gdwarf-5 -fno-omit-frame-pointer -fno-inline -std=c11 fixture-p.c -o fixture-p.exe

gcc -O0 -g3 -gdwarf-5 -fno-omit-frame-pointer -fno-inline -std=c11 fixture-p.c -o fixture-p.exe
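
One way to see what a disassembler (or any downstream tool) would recover is to walk the DWARF subprogram entries directly. This sketch assumes a Linux ELF build of the fixture and the pyelftools package; the PE build shown above would need a different parser.

from elftools.elf.elffile import ELFFile

with open("fixture-p", "rb") as f:
    dwarf = ELFFile(f).get_dwarf_info()
    for cu in dwarf.iter_CUs():
        for die in cu.iter_DIEs():
            # Each subprogram DIE carries the full, untruncated symbol name.
            if die.tag == "DW_TAG_subprogram":
                name = die.attributes.get("DW_AT_name")
                if name:
                    print(len(name.value), name.value[:60])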

Below is a screenshot of a graph view in IDA. It gives a sense of how long the function names are, although they are truncated after 1024 characters in IDA.

Here is an example of a complete function name.

uint32_t __cdecl TokenizerBench__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char___0(TokenizerBench__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std_
_basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__
char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__DemangleLike__std__basic_string__char__std__char_traits__char__std__allocator__char__vector__pair__basic_string__int__PadXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX_Type__LongRecord__With__Lots__Of__Nested__Like__Tokens_0 *p) 

Combining the code-complexity option with the long function names produces a large corpus of similar strings within a single function, which might be taxing on a tokenizer. This attack can be easily defeated by simply not loading the debug/DWARF strings in the disassembler.
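
As a rough way to measure that tax, the sketch below compares token counts for a short symbol against a demangle-like one. It again assumes tiktoken's cl100k_base encoding, and the repeated fragment is a hypothetical stand-in for the generated names.

import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

short_name = "derive_state"
# Hypothetical stand-in for a generated demangle-like symbol.
long_name = "TokenizerBench__DemangleLike__" + \
    "std__basic_string__char__std__char_traits__char__" * 150

print(len(enc.encode(short_name)))  # a handful of tokens
print(len(enc.encode(long_name)))   # thousands of tokens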

Summary

The goal is not to make binaries impossible to reverse, but to push LLM-based analysis into inefficient paths. One approach scales interdependent functions to force complexity. The other inflates token-heavy inputs through debug metadata to stress context limits and attention costs. These are better understood as attempts to trigger worst-case behavior in the analysis pipeline, not attacks on tokenization itself. This highlights a shift in where the pressure points are within LLMs. Context windows, token budgets, and attention scaling become part of the attack surface. If LLMs are used in reverse engineering workflows, understanding where they degrade may matter as much as improving their capability.

The next step is validating whether these ideas actually hold up in practice. That means testing them in a way that doesn't bankrupt me on token costs or get me banned by Anthropic or OpenAI. Odds are my first attempts will be run locally, using resources referenced in this gist.

Feel free to email me at alexander dot hanel at gmail dot com if you have any ideas.

Here is the source code: https://github.com/alexander-hanel/StressingLLMs

Codex’s Model Interaction & Inter-process Communication

Over the weekend I explored OpenAI’s Codex source code using Codex with the goal of understanding how it sends, receives, and processes responses from the API. Here is a link to the report. While going through it, I started thinking about inter-process communication (IPC) between Codex and other processes. In the coding agents I’m familiar with (Anthropic’s Claude Code and OpenAI’s Codex), there isn’t much support for receiving input from external processes, which raises a question I’ve had for a while: how does a third-party process communicate with the agent in a meaningful way? For example, if a command is blocked by a security provider like an EDR, the agent could simply generate a slightly modified version of that command and try again. But how would it know the block was due to a security event and shouldn’t be retried?

To explore this, I had Cursor modify Codex's source to add IPC. The forked, vibe-coded version compiles and connects to OpenAI's API just like a normal Codex install. It has a local file-based IPC surface so secondary processes can discover active sessions and submit feedback, which gets added to the model's context. While testing with multiple Codex instances, I accidentally ran a command to update README.md in the wrong terminal. Earlier, I had injected an "external security control" signal via a Python script, and the session responded with:

“No. README.md was not updated. The edit attempt on README.md was blocked by an external security control, and the runtime indicated not to retry until that condition is cleared.” 

At first it didn't make sense; then it registered that the previous tests had worked and the session context had been updated through the IPC channel. I thought it was fascinating because it opened an almost philosophical question: how should an AI agent safely evaluate prompts from remote processes in the context of its current task? It shows how much trust matters in the context of AI agents.
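
For a flavor of what such a feedback channel could look like, here is a hypothetical sketch of a secondary process dropping a message into a session inbox. The directory layout, schema, and session ID are illustrative, not the fork's actual protocol.

import json, pathlib, time, uuid

# Hypothetical IPC surface: one directory per discovered session.
session_dir = pathlib.Path.home() / ".codex-ipc" / "sessions" / "abc123"
inbox = session_dir / "inbox"
inbox.mkdir(parents=True, exist_ok=True)

msg = {
    "id": str(uuid.uuid4()),
    "ts": time.time(),
    "source": "edr-bridge",  # hypothetical external security process
    "kind": "security_control",
    "text": "Edit to README.md blocked by an external security control; "
            "do not retry until the condition is cleared.",
}
(inbox / f"{msg['id']}.json").write_text(json.dumps(msg))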

The README.md of the project is constructed as a learning guide. Enjoy. 

Agentic AI Security: Reviewing the Past to Predict the Future

OpenAI recently posted a role for a Cybersecurity Landscape Analyst within their Intelligence and Investigation team. One line stood out:

“Develop forward-looking assessments of how cyber threats may evolve over 6–24 months.”

To predict the future of Agentic AI, we only need to look to the past. Agentic AI security is not emerging from nothing. It is replaying the same history as traditional computing security, but within a compressed timeline.

As of this writing, prompt injection is a commonly discussed attack vector against LLM-based systems. At its core, prompt injection exists because LLMs are sequence predictors with no native separation between trusted control instructions (system prompts) and untrusted input (user data). This is not a new problem. This is basically Intel x86 in Real Mode.

In Real Mode, code, data, the stack, and even the interrupt vector table all share the same memory space. There is no privilege separation. Any instruction can jump anywhere, overwrite anything, and execute without restriction. The fundamental issue is identical: no boundary between control and data. Detection strategies in that era relied on pattern matching, heuristics, checksums, and runtime hooking. Modern defenses against prompt injection, such as guardrails, input filtering, and heuristic detection, are not that different. They are variations of the same reactive strategies used before architectural fixes existed.

What about forward-looking cyber threats, like the first agentic-AI worm? For this example, we can consider the Morris Worm of 1988. Its success was not due to a single vulnerability, but to an environment characterized by high trust between systems, widespread exposure of network services, weak authentication mechanisms, and a highly connected user base.

Now map this to Agentic AI. Instead of network services like sendmail, finger, or rsh, we have tool-enabled agents such as OpenClaw. Instead of academic researchers, we have early adopters rapidly integrating these systems into real workflows. Instead of BSD Unix systems in academic environments, we have Mac Minis showing up in homes and offices because people want to run OpenClaw locally. Instead of executable payloads, we have prompts. The conditions for a worm are the same: trust, connectivity, and execution capability. What is currently missing is density. There are not yet enough interconnected, tool-enabled systems for large-scale, worm-like propagation comparable to the Morris Worm or Slammer.

My theory is that the same threats, along with the security mitigations developed to address them since the 60s and 70s, will replay themselves within the microcosm of Agentic AI. We are currently in DOS Mode for Agentic AI. 

Update: A colleague shared the following link:

https://arxiv.org/abs/2403.02817

LLMs != Security Products

Cybersecurity stocks took a dive after Anthropic released a blog post titled "Making frontier cybersecurity capabilities available to defenders." What stood out was not the post itself, but the market reaction. Companies tied to endpoint protection, cloud security, and other traditional cybersecurity products were affected, even though the post had little direct relevance to those companies.

That reaction highlights a disconnect between the perceived capabilities of “AI” and its actual impact on cybersecurity products, a disconnect that likely extends beyond the market. To make sense of that gap, it helps to start with what is actually meant by ‘AI’ in this context. Usage of the term AI (short for Artificial Intelligence) has increased sharply since the release of ChatGPT in November of 2022. In practice, much of what is labeled “AI” today is better described as large language models (LLMs). For readers unfamiliar with LLMs, a common definition is:

“A large language model (LLM) is a type of artificial intelligence that can understand and create human language. These models learn by studying huge amounts of text from books, websites, and other sources.”

What makes LLMs fascinating and applicable to modern life is how they solved, at least on a surface level, a field of AI called Natural Language Processing (NLP). For readers not familiar with NLP: autocomplete, email spam filters, and auto-correct are all examples of NLP. Here is a definition of NLP.

“A field in Artificial Intelligence, and also related to linguistics, focused on enabling computers to understand and generate human language.”

Long-time readers of this blog may recall that I previously used a sub-field of NLP, Natural Language Generation (NLG), to automatically create descriptions of disassembled functions via API calls. On their own, LLMs require text for both training and inference. They are not autonomous systems; without prompts, they do not function. This distinction is important when discussing AI and cybersecurity, because evaluating or classifying security events requires context that does not natively exist as text to feed into a prompt. That context has to be generated by additional software.

Generating that context requires an understanding of, and access to, the complete lifecycle of the security event being described. Walking through this lifecycle matters because it highlights how much logic exists before an event ever becomes text.

A classic example of a security event is a process initiating an outbound network connection directly to an IP address. How that event is handled varies widely depending on the type of security product and where it operates in the OSI model. For this example, assume the product operates at Layer 7, the application layer. The event pipeline in this case includes several distinct steps. A kernel-mode driver or user-mode component monitors process creation and relevant networking APIs. The destination IP address is evaluated to ensure it is not local, then serialized into text and logged. That log data is subsequently forwarded to a file-based or cloud-based centralized logging system. Even this simplified path omits important actions such as blocking the connection or terminating the process. Writing code is not the same as building a security product, and LLMs do not possess the authority or signal access required to determine whether an IP address is benign or malicious. An LLM can describe an alert very well; it cannot, on its own, determine whether that alert represents malicious behavior without pre-existing detection logic, telemetry, or intelligence-derived indicators of compromise.
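
To make the "serialized into text" step concrete, here is a minimal sketch. The field names and values are hypothetical; the point is that by the time text like this exists, a driver or agent has already done the detection work.

import json, time

event = {
    "timestamp": time.time(),
    "event_type": "outbound_connection",
    "pid": 4242,                           # hypothetical process ID
    "image": "C:\\Users\\a\\stage2.exe",   # hypothetical process path
    "dest_ip": "203.0.113.7",              # already verified to be non-local
    "dest_port": 443,
}

# This line of text is what could eventually reach an LLM as context.
log_line = json.dumps(event)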

In practice, an agent is an LLM placed inside a loop, where it can inspect the current state of a system, run tools or commands, observe the results, and decide what to do next until it reaches some stopping point. Without the output of those tools and commands, the LLM provides no value; it has nothing to reason over. The surrounding software is what produces the text that gives the model context.
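
Here is a minimal sketch of that loop. The llm callable and the action shape are hypothetical placeholders, not any particular vendor's API.

def agent_loop(task, llm, tools):
    # llm is a hypothetical callable that returns either
    # ("tool", name, args) or ("final", text).
    context = [task]
    while True:
        action = llm(context)
        if action[0] == "final":
            return action[1]
        _, name, args = action
        # Run the requested tool/command; its textual output is the only
        # thing that gives the model anything new to reason over.
        result = tools[name](args)
        context.append(str(result))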

As of this publication date, LLMs are not going to replace cybersecurity products. These systems are large, long-lived codebases, and their value is not defined by code generation alone. What matters is the telemetry collected and the logic built on top of that telemetry to determine whether the text describing an event represents something benign or something malicious. Large language models can help explain security events, but they don’t replace the systems that detect them, and confusing the two is how markets end up reacting to the wrong things.

msdocsviewer

Hello, 

I forgot to post a recent IDAPython plugin that I created for viewing Microsoft SDK documentation in IDA. Here is an example screenshot of msdocsviewer.


The repository for the plugin can be found here.

Function Trapper Keeper - An IDA Plugin For Function Notes

Function Trapper Keeper is an IDA plugin for writing and storing function notes in IDBs; it's a middle ground between function comments and IDA's Notepad. It's a tool that I have wanted for a while. To understand why, it might be worth describing my process for reverse engineering a binary in IDA.

Upon opening a binary, I always take note of the code-to-data ratio. This can be inferred by looking at the navigation band in IDA. If there is more data than code in the binary, it can hint that the binary is packed or encrypted. If so, I typically stop the triage of the binary and start searching for cross-references to the data. In many instances the cross-references lead to code used for decompressing or decrypting the data. For example, if the binary is a loader, it would contain the second-stage payload in encrypted or some other obfuscated form. By cross-referencing the data and finding the loader's decryption routine, I can quickly pivot to extracting the payload. Another notable signal is inconsistency between data and code. If the navigation band flips from data to code and back, it is likely that IDA's analysis found inconsistencies in the disassembled functions. This could be from anti-disassembly tricks, flawed memory dumps, or something else that needs attention. After the ratios, I look at the strings. I look for the presence of compiler strings, strings related to DLLs and APIs, user-defined strings, or the lack of user-defined strings. If the latter, I'll start searching for encrypted strings and then cross-referencing their usage. This can help find the function responsible for string decryption. If I can't find the string decryption routine, I'll use some automation to find all references to XOR instructions (a quick sketch follows this paragraph). After reviewing strings, I'll do a quick triage of the imported functions. I like to look for sets of APIs that I know are related to certain functionality. For example, if I see calls to VirtualAlloc, VirtualProtect, and CreateRemoteThread, I can infer that process injection is potentially present in the binary.
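
The XOR automation can be as simple as the following IDAPython sketch, which walks every function and prints xor instructions whose operands differ (skipping the register-zeroing xor reg, reg idiom):

import idautils
import idc

for func_ea in idautils.Functions():
    start = idc.get_func_attr(func_ea, idc.FUNCATTR_START)
    end = idc.get_func_attr(func_ea, idc.FUNCATTR_END)
    for head in idautils.Heads(start, end):
        if idc.print_insn_mnem(head) == "xor":
            # Operands that differ suggest data transformation, not zeroing.
            if idc.print_operand(head, 0) != idc.print_operand(head, 1):
                print("0x%x %s" % (head, idc.generate_disasm_line(head, 0)))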

After the previously described triage, I have a high-level overview of the binary and usually know if I should do a deep dive or focus on certain functionality (encrypted strings, unpacking, etc.). If I'm doing a deep dive, I like to label all functions. In my IDBs, the name of a function hints at my level of understanding of it. The more descriptive the function name, the more I know about it. If I know the function does process injection into explorer.exe, I might name it "inject_VirtRemoteThreadExplorer". If I don't care about the function but need to note that it's related to strings and memory allocation, I might label it "str_mem". If I'm super lazy I might name the function "str_mem_??", and yes, you can use "?" in IDA's function names. This is a reminder that I should probably double-check the function if it's used a lot. Once I have all the functions labeled, I can be confident of the general functionality of the binary. This is when I start digging deeper into the functions.

This can vary, but with lots of malware families a handful of functions contain the majority of the notable functionality. This is commonly where I spend most of my time reversing. I have said it before in a previous post: if you aren't writing, you aren't reversing. Since I spend lots of time in these functions, I like to have my notes close by. Notes can be added as function comments, but the text disappears once you scroll down the function, the text can't be formatted, and the comments can't be easily exported; IDA's Notepad suffers from the same issues (minus the export). Having all the function notes in a single pane and being able to export them to markdown is super helpful. My favorite feature of the plugin is that as I scroll from function to function, the text refreshes for each function. The plugin can be seen on the right of the following image.

Having a description accessible minimizes the amount of time I have to read code I already reversed, which is useful when opening up old IDBs. I hope others find it as useful as I do. 

Here is a link to the repo.

For more information on the Navigation band in IDA check out Igor’s post.

Please leave a comment, ping me on twitter or mastodon, or create an issue on GitHub.

Recommended Resources for Learning Cryptography: RE Edition

A common question when first reverse engineering ransomware is "what is a good resource for learning cryptography?" Having an understanding of cryptography is essential when reversing ransomware. Most reverse engineers need to know how to identify the encryption algorithm, follow the key generation, understand key storage, and ensure the encryption implementation isn't flawed. To accomplish this, it is essential to have a good foundational knowledge of cryptography. The following are some recommendations that I have found beneficial on my path to learning cryptography.

One of the most important skills is understanding how common encryption algorithms work. The best introductory book on cryptography is Understanding Cryptography: A Textbook for Students and Practitioners. It was written in a way that "teaches modern applied cryptography to readers with a technical background but without an education in pure mathematics" (source). The book also covers all of the modern crypto schemes in common use. One of the best parts about the book is that each chapter has a corresponding lecture on YouTube taught by the authors. This format is useful because it reinforces the concepts and adds more detail on some of the more difficult topics.

After Understanding Cryptography I'd recommend a non-textbook approach using the cryptopals crypto challenges. It is basically a set of problems that get progressively harder. You can solve the problems using a programming language of your choice. I have yet to complete the challenges, but I'd recommend attempting and solving the first two sets of problems. They introduce a lot of foundational concepts that can actually be applied. From what I learned in the first set, I was able to easily crack XOR-encrypted executable payloads. I love cryptopals so much that I created a mirror of the site and converted it to markdown so I can easily download everything via git.
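
As a taste of what set 1 teaches, here is a minimal sketch of cracking a single-byte XOR key by frequency scoring. The ciphertext is a made-up example, not a real payload.

def score(buf: bytes) -> int:
    # Crude frequency score: count characters common in English/ASCII text.
    common = b"etaoinshrdlu ETAOINSHRDLU"
    return sum(buf.count(c) for c in common)

def crack_single_byte_xor(ct: bytes):
    # Try all 256 keys and keep the one whose plaintext scores highest.
    best = max(range(256), key=lambda k: score(bytes(b ^ k for b in ct)))
    return best, bytes(b ^ best for b in ct)

ct = bytes(b ^ 0x42 for b in b"GetProcAddress LoadLibraryA kernel32.dll")
key, pt = crack_single_byte_xor(ct)
print(hex(key), pt)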

Once a foundational knowledge of cryptography has been established, it is useful to see how the algorithms look when compiled. I came across this while reversing a family of ransomware and couldn't correctly decrypt the data. I was able to recover the private RSA key, decrypt the AES key encrypted with the RSA private key, and decrypt files using AES in CTR mode, but after a certain number of decrypted bytes the data would be corrupted. In response, I continuously reversed the code, studied AES and all its different modes, compiled multiple versions of AES, opened them up in a disassembler, and diffed the results, but the data was still corrupted. Everything pointed to AES in CTR. Eventually I identified that the CTR loop had an off-by-one error, and it didn't matter because (as a colleague pointed out) they also stored the extra byte of the key. It was only when I accounted for the off-by-one error in my decryptor that I was able to successfully decrypt files.
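
As a toy illustration of how sensitive CTR mode is to counter bookkeeping, this sketch (assuming the pycryptodome package) decrypts with a counter that starts one block off, so the keystream never lines up with the ciphertext:

import os
from Crypto.Cipher import AES
from Crypto.Util import Counter

key, nonce = os.urandom(16), os.urandom(8)
pt = b"sixteen byte blk" * 4

enc = AES.new(key, AES.MODE_CTR,
              counter=Counter.new(64, prefix=nonce, initial_value=0))
ct = enc.encrypt(pt)

# Decryptor's counter is off by one: every block decrypts to garbage.
bad = AES.new(key, AES.MODE_CTR,
              counter=Counter.new(64, prefix=nonce, initial_value=1))
print(bad.decrypt(ct))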

After this incident, whenever I come across a new encryption algorithm that I don't understand or want to learn more about, I search for references and source code, add them to a README.md, compile the executables, and upload the .exes along with the PDBs to a repository named asm-examples. I find exploring the disassembled code alongside the symbols and names from the PDB to be valuable. It helps me quickly identify encryption algorithms and makes disassembled or decompiled code less intimidating.

To recap, my go-to resources for learning encryption are Understanding Cryptography: A Textbook for Students and Practitioners, cryptopals, and comparing compiled binaries to the source code. This isn't the most in-depth approach to learning cryptography, but for supporting malware analysis and reverse engineering ransomware it works well.