Security engineering is fundamentally different from feature engineering. A developer asks: "does this code do what I intended?" A security researcher asks: "what can this code do that was NOT intended?" These are opposite questions. Switching between them — building with one mindset, reviewing with the other — is the core professional skill this course develops.
Smart contracts have a combination of properties that make security
uniquely demanding compared to traditional software:
Immutability (partial):
Once deployed, the core logic of most contracts cannot be changed.
A critical bug in a non-upgradeable contract is permanent.
In traditional software: patch and deploy. In smart contracts: either
upgrade (introducing new trust assumptions) or start over.
Value density:
A single function in a 200-line contract can hold $500M.
In traditional software, breaching a single file rarely gives
direct access to all application funds.
The attacker-to-defender resource asymmetry is extreme in DeFi.
Adversarial composability:
Smart contracts compose atomically. An attacker can combine multiple
protocols in a single transaction in ways their developers never anticipated.
Flash loans remove capital barriers. No other system lets an attacker
temporarily borrow $1B with zero collateral.
Irreversibility:
Lost funds are gone. There is no bank to call, no fraud protection,
no chargeback. On-chain settlements are final.
Public code:
Every deployed contract's logic can be read. Attackers can study your
code for as long as they want before striking. They have more time to
find vulnerabilities than you had to write the code.
Economic rationality:
Every exploit that is theoretically possible AND profitable WILL be
attempted. The question is not "could this be exploited" but "how long
before someone does?" This is fundamentally different from traditional
security where many vulnerabilities go unexploited for years.Security analysis fails in two directions:
False negative (missed vulnerability):
The code has a bug. The analyst didn't find it.
Cost: potential total loss of protocol funds.
Cause: insufficient depth, unfamiliar attack pattern, time pressure.
False positive (phantom vulnerability):
The code is actually correct. The analyst flagged it as vulnerable.
Cost: wasted developer time, unnecessary code changes, delays.
Cause: pattern matching without full understanding of context.
Both are costly. But false negatives are catastrophic and false positives
are merely annoying. When in doubt: flag, explain your concern, let
the developer prove it's not a vulnerability.

Invariant:
A condition that must ALWAYS be true, regardless of how the contract
is called or what state it is in.
Example: "totalSupply must always equal the sum of all balances."
Finding invariants and testing them under adversarial conditions is
the most productive framing for security analysis.
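The totalSupply invariant above can be exercised as a property test: drive the contract with random call sequences and assert the invariant after every call. A minimal Python sketch, where the `Token` class is a hypothetical stand-in for the contract, not real contract code:

```python
import random

class Token:
    """Toy token whose transfer logic we want to check against an invariant."""
    def __init__(self, supply):
        self.total_supply = supply
        self.balances = {"deployer": supply}

    def transfer(self, sender, recipient, amount):
        if self.balances.get(sender, 0) < amount:
            return False
        self.balances[sender] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0) + amount
        return True

def invariant_holds(token):
    # "totalSupply must always equal the sum of all balances"
    return token.total_supply == sum(token.balances.values())

# Drive the contract with random call sequences, checking the invariant each time
token = Token(1_000_000)
users = ["deployer", "alice", "bob", "eve"]
for _ in range(10_000):
    token.transfer(random.choice(users), random.choice(users),
                   random.randint(0, 2_000_000))
    assert invariant_holds(token), "invariant violated"
```

This is the same idea fuzzing tools apply to real contracts: the invariant becomes an assertion, and the tool searches for a call sequence that breaks it.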
Attack surface:
Every entry point to the contract where untrusted input can be provided.
External functions, fallback, receive, callbacks from token contracts,
data passed via bytes/calldata parameters.
Trust boundary:
The line between what the contract controls and what it must trust.
msg.sender (user), called contracts (external), oracle data (external),
time (block.timestamp, manipulable), randomness (no safe on-chain source).
Attack vector:
A specific path an attacker can take to violate an invariant.
Primitive:
A basic building block: reentrancy, integer overflow, access control failure.
Most exploits combine multiple primitives.
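As an illustration of the most common primitive, reentrancy, here is a minimal Python simulation. The `Vault` and `Attacker` classes are hypothetical toy models of on-chain behavior, not real contract code:

```python
class Vault:
    """Toy vault with the classic reentrancy bug: it pays out
    before zeroing the depositor's balance."""
    def __init__(self):
        self.balances = {}
        self.eth = 0

    def deposit(self, who, amount):
        self.balances[who] = self.balances.get(who, 0) + amount
        self.eth += amount

    def withdraw(self, who):
        amount = self.balances.get(who, 0)
        if amount == 0:
            return
        # Vulnerable ordering: the external call happens BEFORE the
        # balance is zeroed, so the callee can re-enter withdraw()
        # and pass the balance check again.
        who.receive(self, amount)
        self.eth -= amount
        self.balances[who] = 0

class Attacker:
    def __init__(self, reentry_depth):
        self.depth = reentry_depth
        self.stolen = 0

    def receive(self, vault, amount):
        self.stolen += amount
        if self.depth > 0:
            self.depth -= 1
            vault.withdraw(self)  # re-enter while our balance is still credited

class Honest:
    def receive(self, vault, amount):
        pass

vault = Vault()
vault.deposit(Honest(), 9)        # other users' funds
mallory = Attacker(reentry_depth=9)
vault.deposit(mallory, 1)         # attacker deposits only 1
vault.withdraw(mallory)
print(mallory.stolen)             # prints 10: one withdrawal plus 9 re-entries
```

With a deposit of 1, the attacker extracts the entire vault balance of 10; the fix, in any language, is to update state before making the external call.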
Exploit:
A complete attack that violates an invariant and extracts value.
An exploit is not just a bug: it's a bug plus a path to profit.

When an attacker analyzes a new contract, they systematically work through:
Step 1: What value can be extracted?
→ What tokens does this contract hold?
→ What actions change balances?
→ What would a fully successful attack look like?
Step 2: What access do I have?
→ Which functions can I call? (external/public functions)
→ Can I call them multiple times? (reentrancy surface)
→ Can I call them as an unexpected entity? (contract calling functions
expected to be called by EOAs only)
Step 3: What can I change?
→ Which state variables are affected by my calls?
→ Which calculations use state I can influence?
→ Which external calls does the contract make, and can I control them?
Step 4: What assumptions can I violate?
→ Does the contract assume msg.sender is an EOA? (can be bypassed)
→ Does it assume prices are current? (oracle manipulation)
→ Does it assume sequential execution? (reentrancy)
→ Does it assume the token behaves as ERC-20? (fee-on-transfer, ERC-777)
Step 5: What constraints exist?
→ Which conditions would prevent the attack?
→ Can those conditions be circumvented? (flash loans, sequence manipulation)
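Step 4's question "does it assume the token behaves as ERC-20?" can be made concrete. The sketch below (hypothetical `FeeOnTransferToken` and `NaivePool` toy classes, not real contract code) shows a pool that credits the caller with the amount requested rather than the amount actually received:

```python
class FeeOnTransferToken:
    """Toy token that burns a 2% fee on every transfer."""
    FEE_BPS = 200  # fee in basis points

    def __init__(self):
        self.balances = {}

    def mint(self, who, amount):
        self.balances[who] = self.balances.get(who, 0) + amount

    def transfer_from(self, sender, recipient, amount):
        fee = amount * self.FEE_BPS // 10_000
        self.balances[sender] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0) + amount - fee

class NaivePool:
    """Credits the REQUESTED deposit amount instead of the amount received."""
    def __init__(self, token):
        self.token = token
        self.credited = {}

    def deposit(self, who, amount):
        before = self.token.balances.get(self, 0)
        self.token.transfer_from(who, self, amount)
        received = self.token.balances[self] - before
        # BUG: assumes received == amount; fee-on-transfer tokens break this.
        self.credited[who] = self.credited.get(who, 0) + amount

token = FeeOnTransferToken()
pool = NaivePool(token)
token.mint("alice", 1_000)
pool.deposit("alice", 1_000)
print(pool.credited["alice"])   # 1000 credited...
print(token.balances[pool])     # ...but only 980 actually held
```

The pool's internal accounting now overstates its holdings by the fee, and the gap compounds with every deposit; an attacker who can withdraw against the credited amount drains the difference from other users.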
→ Is there a time constraint? (block timestamp, cooldown)

Rather than asking "what bugs might exist?", professional auditors ask
"what invariants must this contract maintain?" Then systematically try to
break them.
Finding invariants for a lending protocol:
1. totalDebt ≤ totalCollateralValue * collateralRatio (solvency)
2. No user can borrow more than their collateral allows (per-user solvency)
3. Liquidations can only happen when the health factor (HF) is below 1.0 (correct liquidation trigger)
4. Interest accrual increases the index monotonically (no deflation)
5. Total shares minted = Σ(user shares) (accounting integrity)
6. Protocol reserves can only increase from fees, not decrease (no drain)
For each invariant, the auditor asks:
→ Under what sequence of calls could this be violated?
→ What external factor (price, time, external contract) could cause this?
→ Is there any single-transaction path to violating this?
This approach is:
→ More systematic than "look for known bugs"
→ More testable (invariants become fuzz test assertions)
→ More communicable (invariants go in documentation and tests)

Before starting any analysis, internalize these principles:
Assume every function will be called by an adversarial contract, not an EOA.
Assume token transfers can have callbacks.
Assume prices can be manipulated within a single block.
Assume flash loans are available for any amount.
Assume the maximum possible gas is available.
Assume the contract will receive funds in unexpected ways.
Assume the most economically rational action will be taken.
Assume any temporary state can be "frozen" by blocking transactions.
Assume any publicly readable state is known to attackers.
Assume any off-chain computation can be replicated by an attacker.
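The probing questions for the lending-protocol invariants listed earlier, in particular "what external factor could cause this?", can be made concrete for invariant 1 (system solvency). A minimal Python sketch with a hypothetical `ToyLendingPool`; a price drop violates solvency even though every individual borrow passed its per-user check:

```python
class ToyLendingPool:
    """Hypothetical minimal lending pool: collateral deposits and borrows."""
    COLLATERAL_RATIO = 0.75   # may borrow up to 75% of collateral value

    def __init__(self):
        self.collateral = {}  # units of the collateral asset, per user
        self.debt = {}        # units of the borrowed asset, per user
        self.price = 1.0      # oracle price of the collateral asset

    def deposit(self, user, amount):
        self.collateral[user] = self.collateral.get(user, 0) + amount

    def borrow(self, user, amount):
        # Per-user check (invariant 2), evaluated at the CURRENT price
        limit = self.collateral.get(user, 0) * self.price * self.COLLATERAL_RATIO
        if self.debt.get(user, 0) + amount <= limit:
            self.debt[user] = self.debt.get(user, 0) + amount

def solvency_invariant(pool):
    # Invariant 1: totalDebt <= totalCollateralValue * collateralRatio
    total_debt = sum(pool.debt.values())
    total_collateral_value = sum(pool.collateral.values()) * pool.price
    return total_debt <= total_collateral_value * pool.COLLATERAL_RATIO

pool = ToyLendingPool()
pool.deposit("alice", 100)
pool.borrow("alice", 75)          # exactly at the limit while price == 1.0
assert solvency_invariant(pool)   # holds so far

pool.price = 0.8                  # the external factor: a 20% price drop
# Debt of 75 now exceeds 100 * 0.8 * 0.75 = 60 -> invariant violated
assert not solvency_invariant(pool)
```

In a real audit this check would run as a fuzz-test assertion over random call sequences and price paths; that is exactly what makes invariants "more testable" than a list of known bug patterns.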
Security research requires a fundamentally different mindset than development. The core framework: identify invariants (what must always be true), identify the attack surface (where untrusted input enters), and systematically search for paths that violate invariants through the attack surface. The adversarial mindset combined with invariant-based thinking is the professional foundation for all techniques taught in this course.
Module 1: SECURITY MINDSET & THREAT MODELING
Thinking Like an Attacker to Build Like a Defender