System Prompt Testing Methodology
These notes are part of my experiment in "learning in public" through a semi-automated Zettelkasten. Each note is atomic (containing one core idea), heavily interconnected, and designed to evolve as my understanding deepens. I'll continue to share notes that can benefit developers, researchers, or anyone curious about systematic knowledge management and technical methodologies.
This first note tackles AI system prompt testing, but not the "did it give the right answer" kind. Traditional frameworks already handle that. Instead, this methodology tests whether an AI maintains its boundaries when someone tries to break them.
AI systems face unique attack vectors. "Ignore previous instructions" shouldn't work, yet variations slip through. Security researchers keep rediscovering the same vulnerabilities because we lack systematic approaches to behavioral testing.
The methodology covers four core dimensions: behavioral consistency, boundary enforcement, adversarial stress testing, and context degradation. Each includes concrete attack patterns—everything from simple role confusion to sophisticated prompt injections hidden in code comments.