TL;DR: Certified Circuits identify the parts of a neural network that truly represent a concept, producing explanations that are more reliable, compact, and robust to changes in the data used to discover them.
Abstract
Understanding how neural networks arrive at their predictions is essential for debugging, auditing, and deployment. Mechanistic interpretability pursues this goal by identifying circuits: minimal subnetworks responsible for specific behaviors. However, existing circuit discovery methods are brittle: discovered circuits depend strongly on the chosen concept dataset and often fail to transfer out-of-distribution, raising doubts about whether they capture the underlying concept or merely dataset-specific artifacts. We introduce Certified Circuits, a framework that provides provable stability guarantees for circuit discovery. It wraps any black-box discovery algorithm with randomized data subsampling to certify that component inclusion decisions are invariant to bounded edit-distance perturbations of the concept dataset. The framework abstains on neurons whose inclusion is unstable, yielding circuits that are both more compact and more accurate. On ImageNet and out-of-distribution datasets, certified circuits achieve up to 91% higher accuracy while using 45% fewer neurons, and remain reliable where baselines degrade. Certified Circuits puts circuit discovery on formal ground, producing mechanistic explanations that are provably stable and better aligned with the target concept.
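The subsampling-and-abstention idea can be sketched as follows. This is a minimal illustration under assumptions, not the paper's actual procedure: the `discover` callback, the vote threshold `tau`, and the empirical abstention rule are all hypothetical stand-ins, and the real framework derives formal edit-distance certificates rather than a plain empirical vote.

```python
import random

def certified_circuit(discover, dataset, n_runs=100, tau=0.9,
                      subsample_frac=0.5, seed=0):
    """Wrap a black-box circuit-discovery routine with randomized subsampling.

    discover(data) -> set of component ids included in the circuit.
    A component is certified as included if it appears in at least a
    tau fraction of runs, treated as excluded if it appears in at most
    a (1 - tau) fraction, and abstained on otherwise. (Illustrative
    thresholds; the paper's certificates are formal, not empirical.)
    """
    rng = random.Random(seed)
    counts = {}
    k = max(1, int(subsample_frac * len(dataset)))
    for _ in range(n_runs):
        sub = rng.sample(dataset, k)          # random subsample of the concept data
        for comp in discover(sub):
            counts[comp] = counts.get(comp, 0) + 1
    certified, abstain = set(), set()
    for comp, n in counts.items():
        frac = n / n_runs
        if frac >= tau:
            certified.add(comp)               # stable: include in the circuit
        elif frac > 1 - tau:
            abstain.add(comp)                 # unstable: abstain
    return certified, abstain
```

A component that every subsample's discovery run includes ends up certified, while one whose inclusion flips with the subsample lands in the abstention set, which is the intuition behind the compactness gains reported in the abstract.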
BibTeX
@misc{anani2026certifiedcircuitsstabilityguarantees,
    title         = {Certified Circuits: Stability Guarantees for Mechanistic Circuits},
    author        = {Alaa Anani and Tobias Lorenz and Bernt Schiele and Mario Fritz and Jonas Fischer},
    year          = {2026},
    eprint        = {2602.22968},
    archivePrefix = {arXiv},
    primaryClass  = {cs.AI},
    doi           = {10.48550/arXiv.2602.22968},
    url           = {https://arxiv.org/abs/2602.22968}
}