Symbolic Analysis Meets LLMs
DeFiAligner: When DeFi Docs and Code Diverge
In DeFi, documentation often stands in for source code. When the docs say one thing and the deployed contracts do another, users absorb the risk.
DeFiAligner addresses that gap by combining symbolic execution over deployed bytecode with LLM-based reasoning over project documents. The system extracts symbolic expressions for token balance changes and branch conditions, then asks whether the documented behavior and actual contract behavior still line up.
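To make the idea concrete, here is a minimal sketch of what one extracted symbolic fact might look like once flattened into text an LLM can reason over. The class and field names are illustrative assumptions for this post, not DeFiAligner's actual schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SymbolicFinding:
    """One path-level fact recovered by symbolic execution (illustrative)."""
    function: str        # entry point explored, e.g. "swap"
    path_condition: str  # branch condition along this execution path
    balance_delta: str   # symbolic change in a token balance on this path

def render(f: SymbolicFinding) -> str:
    """Flatten a finding into a single line a prompt can quote verbatim."""
    return f"{f.function}: if ({f.path_condition}) then {f.balance_delta}"

# Hypothetical example of a swap path with a fee term.
finding = SymbolicFinding(
    function="swap",
    path_condition="amountIn > 0 and reserveOut > amountOut",
    balance_delta="balance[user] += amountOut * (1 - fee)",
)
```

The point of the flattened form is that the model never sees raw bytecode, only precise, human-readable claims about what each path does.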
- Symbolic analysis first. SEVM explores on-chain binary code and records the execution states needed to reason about behavior precisely.
- LLMs applied at the right boundary. Rather than replacing analysis, the model compares extracted symbolic behavior against human-facing documentation using structured prompts.
- Strong recall on public datasets. The AFT 2024 paper reports recall rates of 92% and 90% across two public datasets.
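The comparison step described above can be sketched as a structured prompt that places the documented claim next to the extracted symbolic behavior and asks for a verdict. The template, verdict labels, and parsing below are assumptions made for illustration, not DeFiAligner's actual prompts.

```python
# Hedged sketch of the doc-vs-bytecode comparison step.
# Template wording and the ALIGNED/MISALIGNED labels are illustrative.
PROMPT_TEMPLATE = """You are auditing a DeFi protocol.

Documented claim:
{doc_claim}

Symbolic behavior extracted from deployed bytecode:
{symbolic}

Answer ALIGNED or MISALIGNED on the first line, then explain in one sentence."""

def build_prompt(doc_claim: str, symbolic: str) -> str:
    """Fill the structured template with one (claim, behavior) pair."""
    return PROMPT_TEMPLATE.format(doc_claim=doc_claim, symbolic=symbolic)

def parse_verdict(reply: str) -> bool:
    """True if the model's first line judged docs and code to be aligned."""
    head = reply.strip().splitlines()[0].upper()
    return head.startswith("ALIGNED")

# Hypothetical pair: the docs promise a 0.3% fee, the path charges 0.5%.
prompt = build_prompt(
    doc_claim="Swaps charge a 0.3% fee paid to liquidity providers.",
    symbolic="swap: balance[user] += amountOut * (1 - 0.005)",
)
```

Keeping the verdict on a fixed first line makes the model's answer machine-checkable, which matters when the system is run over many claims per project.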
Why this matters
This is a good example of AI being useful because it sits on top of a precise analysis layer rather than replacing one.
- The biggest failures are often mismatches. A protocol can be exploitable even when the code is internally consistent if the public promise and deployed behavior diverge.
- This is a review workflow, not just a benchmark. The approach is useful for listings review, partner due diligence, and monitoring changes over time.
- Good analysis beats superficial AI wrappers. The symbolic layer gives the model precise semantics to reason over, which is why the overall system works.
Documentation drift is a real security problem, especially in fast-moving DeFi systems. For protocol review support or custom analysis, contact [email protected].