ADP Analysis: How Average Draft Position Data Informs Value Decisions
Average Draft Position — the single number that follows every fantasy player like a shadow — is simultaneously one of the most useful signals in draft preparation and one of the most misread. This page examines what ADP actually measures, how it gets constructed, what moves it, where it misleads, and how analysts use it to identify the gaps between market consensus and genuine value.
- Definition and scope
- Core mechanics or structure
- Causal relationships or drivers
- Classification boundaries
- Tradeoffs and tensions
- Common misconceptions
- Checklist or steps
- Reference table or matrix
Definition and scope
ADP is a rolling arithmetic mean — sometimes a median — of the draft slot at which a given player has been selected across a sample of completed drafts. If a running back gets taken at picks 8, 11, 9, and 14 across four drafts, the ADP is 10.5. That's the entire mathematical engine. What makes it interesting isn't the formula; it's the population of drafters who generate the data and the market dynamics that population encodes.
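The arithmetic from the running back example can be checked in a few lines of Python. This is a minimal sketch of the formula itself; the function name is illustrative, not any platform's API.

```python
from statistics import mean, median

def adp(slots: list[float]) -> float:
    # Average Draft Position: arithmetic mean of the slots at which
    # a player was selected across a sample of completed drafts.
    return mean(slots)

# The running back from the example: taken at picks 8, 11, 9, and 14.
picks = [8, 11, 9, 14]
print(adp(picks))      # 10.5
print(median(picks))   # 10.0 -- the median variant some providers publish
```

Note that the mean and median already disagree on this tiny sample, which is one reason the choice of statistic matters to providers.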
The scope of ADP data extends across every major fantasy format: redraft, dynasty, keeper, best-ball, and other season-long contest formats. Each format produces a distinct ADP ecosystem. A wide receiver with a 24-pick ADP in standard redraft might carry a dynasty ADP equivalent to a first-round rookie pick — same player, entirely different valuation context because the time horizon changes everything.
Major platforms that publish ADP data publicly include ESPN, Yahoo Fantasy, Sleeper, and FantasyPros (which aggregates across platforms). The value over replacement player framework depends heavily on ADP as its cost input — you cannot calculate surplus value without knowing what the market is charging.
Core mechanics or structure
ADP datasets are built from mock drafts and actual live drafts, depending on the source. FantasyPros weights its aggregate ADP toward live drafts on the grounds that mock drafts involve lower stakes and therefore less representative decision-making — a reasonable methodological choice, though one that introduces recency bias as the season approaches.
The sample size matters enormously. An ADP built from 12 drafts is nearly meaningless as a precision instrument; one built from 1,200 drafts starts to behave like a real market price. Platforms typically publish the draft count used to calculate each player's ADP, and that number deserves scrutiny before the figure gets treated as authoritative.
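One way to make the sample-size point concrete is to treat a published ADP as a sample mean and look at its standard error. This is a hedged sketch, not any platform's actual methodology, and the ±6-pick scatter is an assumed, illustrative figure.

```python
import math
from statistics import stdev

def adp_standard_error(slots: list[float]) -> float:
    # Standard error of the mean draft slot: sample std dev / sqrt(n).
    # A rough gauge of how precise the published ADP figure is.
    return stdev(slots) / math.sqrt(len(slots))

# Same per-draft scatter (assumed std dev of ~6 picks), very different precision:
spread = 6.0
for n in (12, 1200):
    print(f"n={n}: standard error ~ {spread / math.sqrt(n):.2f} picks")
```

With identical scatter, the 12-draft sample carries a standard error near 1.7 picks versus roughly 0.17 for the 1,200-draft sample — which is exactly why the published draft count deserves scrutiny.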
Temporal structure is the other core mechanical feature. ADP changes continuously from the moment offseason data starts flowing — combine results in late February, free agency in March, OTAs in May, training camp in late July, and preseason games in August. A running back's ADP in early May and his ADP on August 20 may differ by 30 or more picks if a depth chart competition resolved in his favor. The draft value tools and software category tracks these movement curves explicitly.
Within a draft, ADP functions as an anchor. Drafters who know a player's ADP tend to drift toward it when making decisions under uncertainty — a well-documented behavioral pattern consistent with anchoring bias as described in the behavioral economics literature (Tversky & Kahneman, "Judgment under Uncertainty: Heuristics and Biases," 1974).
Causal relationships or drivers
Four forces move ADP in measurable ways:
Injury news produces the sharpest short-term spikes. A starting running back's backup can move 40 picks in under 24 hours following an injury report, a pattern that recurs with handcuff backs every NFL season. The injury risk and draft value discounting analysis framework quantifies how much discount the market typically applies.
Depth chart changes drive mid-magnitude, sustained moves. A wide receiver promoted to WR1 in a high-volume passing offense might gain 15–20 ADP positions over a two-week period as the news disseminates through the drafter population.
Scoring format effects create systematic ADP divergence across platforms. A tight end in a standard-scoring format and a PPR format can differ by 12+ picks simply because reception volume gets priced differently. This is why analysts applying custom scoring value adjustments treat ADP as format-dependent data, not universal truth.
Hype and narrative are the most underappreciated driver. A player featured prominently in a beat writer's training camp dispatch will see ADP movement even without a corresponding change in objective situation. This is market noise, and it creates exploitable inefficiencies — the core thesis behind market inefficiencies in fantasy drafts.
League size has a structural relationship with ADP utility: in 10-team leagues, ADP data from 12-team leagues systematically misprices positions because roster construction logic changes when two full teams' worth of roster spots disappear league-wide.
Classification boundaries
ADP data splits cleanly into three operational categories that analysts should treat as distinct instruments:
Consensus ADP aggregates across platforms and formats. It's the broadest signal and the least actionable in isolation because the averaging obscures format-specific pricing.
Platform-specific ADP reflects the user base of a particular platform. ESPN's ADP skews toward casual drafters; Sleeper's skews toward more engaged players who interact with dynasty and keeper formats. Using ESPN ADP to prep for a Sleeper draft introduces a population mismatch.
Mock draft ADP vs. live draft ADP is the most important classification boundary for precision work. Mock drafts overrepresent upside selection — participants take risks they wouldn't in live leagues because there are no actual consequences. Live draft ADP tends to price stars more conservatively (earlier) and speculative upside plays more aggressively (later) than mock data suggests. The mock draft value extraction framework addresses this gap directly.
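The mock-vs-live boundary can be operationalized as a simple screen: flag players whose two ADPs diverge by more than a chosen threshold. A minimal sketch — the 5-pick threshold and player names are illustrative assumptions, not sourced figures.

```python
def mock_live_gaps(mock_adp: dict[str, float],
                   live_adp: dict[str, float],
                   threshold: float = 5.0) -> dict[str, float]:
    # Positive gap: the player goes later in live drafts than mocks suggest;
    # negative gap: live drafters pay up earlier than mock data implies.
    gaps = {}
    for player in mock_adp.keys() & live_adp.keys():
        gap = live_adp[player] - mock_adp[player]
        if abs(gap) >= threshold:
            gaps[player] = gap
    return gaps

mock = {"WR_A": 20.0, "WR_B": 35.0, "RB_A": 50.0}
live = {"WR_A": 21.0, "WR_B": 44.0, "RB_A": 46.0}
print(mock_live_gaps(mock, live))   # {'WR_B': 9.0}
```

Players surfacing in this screen are exactly where the mock draft value extraction framework applies.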
Tradeoffs and tensions
The central tension in ADP analysis is between following the market and fading it. ADP represents collective intelligence across thousands of drafters — that's a real signal. But markets can be collectively wrong, particularly when they're processing new information slowly or when behavioral biases (recency, narrative, anchoring) dominate.
The tiered drafting methodology approach partially sidesteps this tension by treating ADP as a timing signal within tiers rather than a ranking instruction. If three wide receivers belong in the same value tier, their internal ADP ordering matters far less than knowing when the tier ends and the next, inferior tier begins.
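A tier boundary, in ADP terms, is just an unusually large jump between consecutive ADPs within a position group. A gap-based sketch of that idea follows; the 6-pick threshold is an assumption for illustration, not a published standard.

```python
def adp_tiers(players: list[tuple[str, float]],
              gap: float = 6.0) -> list[list[tuple[str, float]]]:
    # Sort a (non-empty) position group by ADP, then start a new tier
    # wherever the jump between consecutive ADPs exceeds `gap` picks.
    ordered = sorted(players, key=lambda p: p[1])
    tiers, current = [], [ordered[0]]
    for prev, nxt in zip(ordered, ordered[1:]):
        if nxt[1] - prev[1] > gap:
            tiers.append(current)
            current = []
        current.append(nxt)
    tiers.append(current)
    return tiers

wrs = [("WR_C", 15.0), ("WR_A", 4.0), ("WR_B", 6.0), ("WR_D", 17.0)]
print(adp_tiers(wrs))
```

With the default threshold this splits the hypothetical group into {WR_A, WR_B} and {WR_C, WR_D}: the internal ordering inside each tier matters less than the 9-pick cliff between them.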
A second tension: ADP from early in the offseason may be more analytically clean — less contaminated by late-breaking narrative — but less accurate about actual draft-day context. ADP from two days before a major industry draft window reflects current reality but may have been whipsawed by a week of camp hype. Neither snapshot is unambiguously superior; the analyst's task is to weight them appropriately.
The projected points vs. draft cost comparison crystallizes a third tension. ADP is a cost metric, not a value metric. A player with a high ADP isn't necessarily overvalued; a player with a low ADP isn't necessarily a bargain. The gap between projected production and draft cost is where value lives — and ADP is only half of that equation.
Common misconceptions
"ADP is a ranking." ADP is a price. It describes what the market charges, not what the market thinks about relative quality. Two players with ADPs of 22 and 24 are nearly interchangeable in market terms; treating that 2-pick gap as a meaningful quality signal misreads the data.
"A player drafted before his ADP is a steal." Draft position relative to ADP tells an analyst where to find value windows, but it doesn't guarantee value was obtained. If a player's ADP is itself inflated relative to projection, getting him 3 picks early still means overpaying.
"ADP reflects expert opinion." Major platform ADP reflects the behavior of platform users, who are a broad population of varying sophistication. Expert ADP products (like those from established analysts and consensus ranking services) are a distinct dataset and often diverge meaningfully from public ADP by position — gaps of 8–15 picks for tight ends and quarterbacks are common in standard formats.
"ADP is stable." The volatility of ADP in the five weeks before a typical industry draft window can move a player's position 25+ picks based on events unrelated to long-term value. Treating a mid-July ADP as a fixed reference point for an August draft is a structural error.
Checklist or steps
The following sequence describes how ADP analysis is applied in systematic draft preparation:
- Establish format parameters — confirm scoring system (PPR, half-PPR, standard), league size (8, 10, 12, 14 teams), roster settings, and starting requirements before sourcing ADP data.
- Select platform-appropriate ADP — match ADP source to the platform where the draft will occur; avoid cross-platform population mismatch.
- Note sample size — verify the draft count behind each player's ADP figure; treat sub-100-draft samples as directional, not precise.
- Record date of ADP snapshot — log when the data was pulled; tag any player with known news events since that date.
- Calculate ADP-to-projection gaps — compare projected finish rank (by points) to ADP rank; gaps of 5+ positions in either direction flag candidates for further analysis.
- Identify tier boundaries — map ADP clusters to determine where natural scarcity breaks occur by position.
- Assign value grades — label each player's ADP as reflecting positive value (projected above ADP), neutral, or negative value (projected below ADP).
- Track ADP movement daily in final 2 weeks — log directional changes; rising ADP on unchanged news signals narrative inflation rather than fundamental improvement.
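Steps five and seven of the checklist above reduce to a small amount of arithmetic. A hedged sketch: the 5-position flag threshold comes from the checklist itself, while the function name and grading labels are otherwise illustrative.

```python
def value_grade(projected_rank: int, adp_rank: int, flag_gap: int = 5) -> str:
    # Compare projected finish rank (by points) to ADP rank.
    # gap > 0: the market drafts the player later than projection warrants.
    gap = adp_rank - projected_rank
    if gap >= flag_gap:
        return "positive"
    if gap <= -flag_gap:
        return "negative"
    return "neutral"

print(value_grade(projected_rank=10, adp_rank=18))  # positive: projection ahead of cost
print(value_grade(projected_rank=30, adp_rank=22))  # negative: market pays above projection
print(value_grade(projected_rank=12, adp_rank=14))  # neutral: within the noise band
```

Run over a full projection set, this produces the positive/neutral/negative labels the checklist calls for, with the flagged gaps feeding the deeper per-player analysis.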
Reference table or matrix
ADP Data Type Comparison Matrix
| ADP Type | Source Population | Typical Bias | Best Use Case | Reliability Window |
|---|---|---|---|---|
| Consensus (aggregated) | Broad cross-platform | Narrative-driven moves amplified | General market benchmarking | 4–8 weeks before draft |
| Platform-specific | Single platform user base | Varies by platform demographics | Same-platform draft prep | 2–4 weeks before draft |
| Expert consensus ADP | Analysts and rankers | Undervalues casual-use positions | Identifying public/expert gaps | Continuous through draft |
| Mock draft ADP | Low-stakes participants | Upside-seeking, risk-tolerant | Understanding directional trends | Supplementary only |
| Live draft ADP | Actual-stakes participants | Conservative at top, aggressive in middle | Most accurate cost signal | Final 2 weeks |
| Dynasty ADP | Long-term focused players | Age and trajectory weighted heavily | Dynasty-specific roster decisions | Format-specific |
| Best-ball ADP | High-volume casual/advanced | Upside and ceiling maximization | Best-ball drafting contexts | Format-specific |
The resource at draftvalueanalytics.com anchors this analysis within a broader framework of draft capital and positional economics — ADP being one instrument in a larger toolkit rather than a complete valuation system on its own.