Deflated Sharpe Ratio
The headline Sharpe is a lie when you've backtested 10,000 strategies to find it. We deflate every reported Sharpe for the number of trials, autocorrelation, and skew/kurtosis (Bailey & López de Prado, 2014).
DSR = Φ( (ŜR − E[SR_max]) · √(n−1) / √(1 − γ₃·ŜR + (γ₄−1)/4·ŜR²) ),  n = sample length, E[SR_max] = expected best Sharpe across all trials
▸ STUDENT TAKEAWAY Your students stop being fooled by 3.0+ Sharpe screenshots and learn to ask: "how many trials produced this?"
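The deflation step fits in a few lines. A minimal stdlib sketch, with assumptions worth flagging: `var_sr` stands in for the variance of Sharpe estimates across trials (which Bailey & López de Prado estimate from the trial set itself), and `sr` is the per-period, non-annualized Sharpe.

```python
from math import sqrt, e
from statistics import NormalDist

N = NormalDist()  # standard normal: cdf = Φ, inv_cdf = Φ⁻¹

def expected_max_sharpe(n_trials: int, var_sr: float) -> float:
    """Approximate E[SR_max] across n_trials independent backtests whose
    Sharpe estimates have variance var_sr (extreme-value approximation)."""
    gamma = 0.5772156649015329  # Euler-Mascheroni constant
    return sqrt(var_sr) * ((1 - gamma) * N.inv_cdf(1 - 1 / n_trials)
                           + gamma * N.inv_cdf(1 - 1 / (n_trials * e)))

def deflated_sharpe(sr: float, n_trials: int, n_obs: int,
                    skew: float = 0.0, kurt: float = 3.0,
                    var_sr: float = 0.01) -> float:
    """P[true SR > 0] after deflating for multiple testing and for the
    skew/kurtosis of returns. n_obs = number of return observations."""
    sr0 = expected_max_sharpe(n_trials, var_sr)
    denom = sqrt(1 - skew * sr + (kurt - 1) / 4 * sr ** 2)
    return N.cdf((sr - sr0) * sqrt(n_obs - 1) / denom)
```

The more strategies you tried, the higher the hurdle `sr0` climbs, so the same reported Sharpe deflates toward zero as the trial count grows.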
Combinatorial Purged CV
k-fold cross-validation leaks information through label overlap. CPCV purges training folds of any sample whose label horizon overlaps the test set, with an embargo period to neutralize serial correlation.
φ(t) = { i : [t_i, t_i + h] ∩ [t_test, t_test + h + ε] ≠ ∅ }
▸ STUDENT TAKEAWAY Your students will stop overfitting and ship strategies that survive contact with reality.
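The purge + embargo rule itself is simple interval logic. A toy sketch with hypothetical names (real CPCV additionally enumerates combinatorial train/test group splits; this shows only the purging of one split):

```python
def purged_train_indices(label_spans, test_idx, embargo=0):
    """label_spans[i] = (start, end) bars covered by sample i's label.
    Keep a training sample only if its label interval does not overlap
    the test block, extended by an embargo of `embargo` bars to absorb
    serial correlation."""
    lo = min(label_spans[i][0] for i in test_idx)
    hi = max(label_spans[i][1] for i in test_idx) + embargo
    test = set(test_idx)
    return [i for i, (s, e) in enumerate(label_spans)
            if i not in test and (e < lo or s > hi)]

# ten samples, each label spanning 3 bars; test on samples 4-5
spans = [(i, i + 3) for i in range(10)]
train = purged_train_indices(spans, test_idx=[4, 5], embargo=1)
```

With 3-bar labels and a 1-bar embargo, almost every neighbor of the test block overlaps it and gets purged; only samples whose labels resolve strictly before (or start strictly after) the embargoed test window survive.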
Walk-Forward with Anchored Origin
Single backtests are overfit by construction. We use anchored walk-forward: re-estimate every N bars, validate on the next out-of-sample window, never look forward, never re-tune on the test set.
OOS_t = f( D_{train}^{1..t} ) → score on D_{t+1..t+m}
▸ STUDENT TAKEAWAY A simple rule your students can apply tomorrow: never report a metric that touched the test set.
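The loop above can be sketched in a dozen lines. Function names and the `fit`/`score` callback shape are illustrative, not a library API; defaults of 252 initial bars and 21-bar steps are one common daily-bar convention.

```python
from statistics import mean

def anchored_walk_forward(bars, fit, score, initial=252, step=21):
    """Anchored origin: every training window starts at bar 0 and grows.
    Each step re-fits on all available history, then scores on the next
    unseen step-bar window. The test window never feeds back into fit."""
    out = []
    t = initial
    while t + step <= len(bars):
        model = fit(bars[:t])                    # past bars only
        out.append(score(model, bars[t:t + step]))  # strictly future bars
        t += step
    return out

# illustrative use: the "model" is just the historical mean,
# scored by its absolute error against the next window's mean
scores = anchored_walk_forward(
    list(range(300)),
    fit=mean,
    score=lambda m, test: abs(m - mean(test)),
)
```

Because the origin is anchored, later windows train on strictly more data; the out-of-sample scores are the only numbers you are allowed to report.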
Probabilistic Sharpe Ratio
Probability that the true Sharpe exceeds a benchmark, given sample length and return moments. Far more honest than a point estimate, especially with fewer than 36 months of live data.
PSR(SR*) = Φ( (ŜR − SR*)·√(n−1) / √(1 − γ₃·ŜR + (γ₄−1)/4·ŜR²) )
▸ STUDENT TAKEAWAY Your students learn to say "73% confidence Sharpe > 1" instead of "my Sharpe is 1.4".
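The PSR formula maps directly to code. A minimal sketch, stdlib only; the Sharpe inputs are per-period (so an annualized Sharpe of 1.4 on monthly data enters as 1.4/√12), and `n` is the number of return observations.

```python
from math import sqrt
from statistics import NormalDist

def probabilistic_sharpe(sr_hat, sr_star, n, skew=0.0, kurt=3.0):
    """P[true SR > sr_star] given n observations and the sample
    skew/kurtosis of returns. All Sharpe inputs are per-period."""
    denom = sqrt(1 - skew * sr_hat + (kurt - 1) / 4 * sr_hat ** 2)
    return NormalDist().cdf((sr_hat - sr_star) * sqrt(n - 1) / denom)

# 36 months of data, annualized Sharpe 1.4, benchmark of 1.0 annualized
p = probabilistic_sharpe(1.4 / sqrt(12), 1.0 / sqrt(12), n=36)
```

With only three years of monthly data this lands well short of certainty, which is exactly the phrasing you want: a confidence level, not a point estimate.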
Realistic Transaction Costs
Linear slippage + square-root impact model + bid/ask spread. A strategy survives a 2× cost-multiplier stress test or it does not deploy. Most retail "edges" die at this step.
c(q) = s/2 + α · σ · √(q / ADV)
▸ STUDENT TAKEAWAY Your students kill bad strategies in 5 lines of code, before risking a dollar.
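Here is roughly what those lines look like. A sketch under stated assumptions: `alpha` is an impact coefficient you would calibrate to your market (1.0 here is a placeholder), `sigma` is daily volatility in price units, and `adv` is average daily volume in shares.

```python
from math import sqrt

def cost_per_share(q, spread, sigma, adv, alpha=1.0):
    """Half the bid/ask spread plus square-root market impact,
    in price units per share, for an order of q shares."""
    return spread / 2 + alpha * sigma * sqrt(q / adv)

def deployable(gross_pnl, q, spread, sigma, adv, stress=2.0):
    """The gate: stay profitable at stress x costs or do not deploy."""
    return gross_pnl > stress * q * cost_per_share(q, spread, sigma, adv)

# 10k shares, $0.02 spread, $0.50 daily vol, 1M ADV
ok = deployable(2000.0, 10_000, 0.02, 0.50, 1_000_000)
```

Note the asymmetry the square root buys you: doubling the order size less than doubles the per-share impact, but total impact cost still grows super-linearly in size.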
Bayesian Position Sizing
Fractional Kelly with a posterior over the edge, not a point estimate. Shrinkage toward zero proportional to estimation uncertainty. The result: smaller bets when you know less.
f* = k · E_post[ (μ − r) / σ² ],  k ∈ (0, 1] the Kelly fraction
▸ STUDENT TAKEAWAY Your students replace gut sizing with a rule that adapts to how confident the data actually allows them to be.
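One way to realize the shrinkage is a normal-normal posterior on the edge. A sketch under that assumption: a N(0, `prior_var`) prior on the per-period excess mean, so the posterior weight on the sample edge is the standard `prior_var / (prior_var + se²)` factor, which shrinks toward zero exactly when the estimate is noisy.

```python
from statistics import mean, pvariance

def bayesian_kelly_fraction(returns, rf=0.0, k=0.5, prior_var=1e-4):
    """Fractional Kelly with the edge shrunk toward zero in proportion
    to estimation uncertainty. Prior on the per-period edge: N(0, prior_var).
    k is the Kelly fraction (0.5 = half-Kelly)."""
    n = len(returns)
    mu_hat = mean(returns) - rf          # sample edge
    var = pvariance(returns)             # per-period return variance
    se2 = var / n                        # variance of the edge estimate
    shrink = prior_var / (prior_var + se2)  # less data -> smaller bet
    return k * shrink * mu_hat / var

# same edge and volatility, 20x more observations -> larger allowed bet
short_hist = [0.01, -0.005, 0.02, 0.0, 0.01]
f_short = bayesian_kelly_fraction(short_hist)
f_long = bayesian_kelly_fraction(short_hist * 20)
```

The point estimate `mu_hat / var` never changes between the two calls; only the evidence behind it does, and the position size follows the evidence.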