Barber Sped Hub
SLD Identification Approaches
Cross-Battery · PSW Models · C-SEP · RTI/MTSS
Three Approaches — Four Frameworks

IDEA 2004 does not mandate a single method for SLD identification — it prohibits a severe discrepancy as the only criterion and allows RTI and other research-based methods. Texas TAC §89.1040 permits multiple approaches. The frameworks below represent the major current options. PSW has three distinct models used in Texas, each with different criteria, tools, and theoretical grounding.

Spectrum from testing-heavy to data-first:
  • XBA — CHC Cross-Battery: high COG test burden; multiple batteries required
  • PSW — 3 models (Dehn · Flanagan · C-SEP): pattern of strengths & weaknesses; COG + achievement pattern
  • C-SEP — Texas C-SEP / Schultz & Stephens: existing data first; targeted testing only
  • RTI / MTSS — Response to Intervention: intervention data primary; lowest COG test burden
Approach 1
Cross-Battery Assessment
XBA / CHC-based
Systematically samples cognitive and academic abilities across multiple batteries using CHC theory to ensure adequate construct coverage.
CHC-based framework
Approach 2 · Three Models
Patterns of Strengths & Weaknesses
PSW — Dehn · Flanagan DD/C · C-SEP
Identifies SLD through a cognitive-academic pattern: processing weakness + academic weakness + cognitive strength. Three distinct models are used in Texas, each with different operationalizations and tools.
Dehn PSWM/PPA · Flanagan DD/C · C-SEP (Texas-developed)
Approach 3
Response to Intervention / MTSS
RTI / MTSS
Documents SLD through failure to respond to evidence-based instruction delivered in a tiered support system, supplemented by comprehensive evaluation data.
IDEA-supported; Texas TAC §89.1040(d)
Side-by-Side Comparison
Dimension | XBA | PSW | C-SEP | RTI / MTSS
Primary basis for SLD | Processing deficits + academic underachievement; CHC construct coverage | Cognitive weakness + academic weakness + cognitive strength pattern | Existing multiple data sources + targeted testing only when data signals a gap; PSW pattern can be documented but existing data, intervention history, and professional judgment are primary — philosophically closer to RTI than PSW | Inadequate response to validated instruction across tiers; academic performance data
Theoretical grounding | CHC theory (Cattell-Horn-Carroll); cognitive science | CHC theory + discrepancy logic; multiple PSW models exist (Naglieri, Flanagan-Kaufman, Hale) | RTI-aligned philosophy; CHC theory as organizational structure; multiple data sources as primary evidence; professional judgment; task demands analysis; oral language integration | Behavioral/instructional science; evidence-based practice; data-based decision making
Role of IQ/COG battery | Central — must sample all relevant CHC narrow abilities | Central — cognitive pattern is the defining criterion | Central — core battery administered first; selective additions only when data indicates a gap | Supplementary — not required to establish SLD; supports exclusionary ruling
Standardized test burden | High — multiple batteries required to fill construct gaps | High — cognitive + achievement batteries required across domains | Moderate — core battery + selective targeted additions only when warranted; designed to be more efficient than other PSW models | Low to moderate — CBM/progress monitoring + targeted evaluation
Required data beyond testing | Observation, history, classroom data (supportive) | Observation, history, classroom data (supportive) | Existing data (Review step) is foundational — screening history, prior records, observations, work samples, parent/teacher input organized before testing begins | Tiered intervention data, fidelity documentation, progress monitoring — all essential
Texas TAC alignment | Permitted — no specific mention; aligns with comprehensive evaluation requirements | Permitted — aligns with §89.1040(d) research-based methods | Well-aligned — Texas-developed; aligns with §89.1040 comprehensive evaluation requirements; PSW approach grounded in multiple data sources | Explicitly supported — §89.1040(d) permits RTI; MTSS is state-promoted framework
Dyslexia Handbook alignment | Compatible — processing measures required; XBA provides systematic coverage | Compatible — phonological processing deficit fits cognitive weakness criterion | Well-aligned — phonological processing documentation, multiple data sources, and failure-to-respond requirements all addressed within C-SEP framework | Strong alignment — handbook requires documented failure to respond to systematic reading instruction
Reliability / validity concerns | Moderate — cross-battery mixing norms is a known limitation; requires careful interpretation | High — PSW pattern criteria vary by model; low inter-rater reliability in research; critiqued for circular reasoning | Moderate — shares some PSW critiques; training required for fidelity; less nationally recognized than RTI but well-established in Texas diagnostician community | Moderate — RTI alone insufficient without comprehensive evaluation; quality depends on fidelity of Tier 2/3 implementation
Status in Texas | Active — systematic CHC-based supplement; used when single battery has construct gaps | Active — LDA-endorsed; trained in Texas through TEDA; PSW is the primary diagnostic approach in Texas | Active — Texas-trained network; 2024 handbook released; growing through TEDA training | Active — IDEA 2004 supported; Texas TAC §89.1040(d) permits; required prior to many evaluations
Best suited for | Complex profiles needing construct-level analysis; diagnosticians with CHC training | Cases where cognitive pattern documentation is needed; experienced evaluators | Cases with rich existing data; students with intervention history; diagnosticians trained in C-SEP; districts wanting an RTI-compatible framework that still meets comprehensive evaluation requirements | School-based SLD identification; early intervention contexts; most typical referrals
The IDEA 2004 Shift — What Changed
⚖️ What IDEA 2004 Actually Says

IDEA 2004 made three key changes relevant to SLD identification:

  • Prohibited states from requiring a severe ability-achievement discrepancy as the sole criterion for SLD identification.
  • Permitted (but did not require) the use of a process that examines whether the student responds to scientific, research-based intervention — the RTI/MTSS pathway.
  • Permitted the use of "other alternative research-based procedures" — the opening for XBA, PSW, and similar approaches.
⚖️ Known Critiques & Debates
  • Norm mixing problem: XBA combines subtests from different batteries with different normative samples — a recognized psychometric limitation.
  • PSW reliability: Multiple competing PSW models (Naglieri, Flanagan-Kaufman, Hale-Fiorello) produce different classification outcomes on the same student. Inter-rater reliability is low.
  • Circular logic critique: PSW's cognitive-achievement discrepancy resembles the discrepancy model IDEA 2004 was designed to move away from.
  • Ongoing research debate: Some researchers (Stuebing, Fletcher) have raised concerns about PSW identification rates and consistency across models; proponents (Flanagan, Hale, Dehn) cite the importance of cognitive assessment in identifying the specific processing mechanisms underlying SLD.
  • Practical considerations: Both XBA and PSW approaches require specialist training and produce documentation that must be clearly communicated to ARD teams.
Note on C-SEP: C-SEP (Schultz & Stephens) is an actively developed, Texas-grounded approach. While classified as PSW, its founders align it philosophically with RTI — placing existing data and multiple sources first, and treating standardized testing as a selective tool. C-SEP has a distinct data-first structure compared to the Dehn and Flanagan DD/C models.
🔀 Looking for battery-level cluster mapping? The SLD Framework Application Reference shows which WJ-V and WISC-V clusters apply to each framework — XBA CHC domain pairing, PSW cognitive–achievement correspondence, and C-SEP reference point selection.
Approach 1 · CHC-Based Framework
Cross-Battery Assessment
A CHC-grounded framework that systematically samples cognitive and academic abilities across multiple standardized batteries to ensure adequate construct coverage — addressing the gap-filling limitation of single-battery evaluations.
📐
Active
Systematic CHC-based construct coverage
Overview & Theoretical Basis
🧠 What It Is

Developed by Dawn Flanagan, Samuel Ortiz, and colleagues, XBA applies CHC (Cattell-Horn-Carroll) theory as an organizing framework for selecting and interpreting tests across multiple batteries. The core premise is that any single battery has measurement gaps at the narrow-ability level — XBA fills those gaps by systematically supplementing with subtests from other batteries.

The approach uses "classifying" and "qualifying" criteria: subtests are classified by their CHC construct, and a broad ability is considered adequately measured only when at least two qualitatively different narrow-ability indicators are available from the same normative period.
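To make the coverage rule concrete, here is a minimal sketch (not part of XBA or XBASS) of how an evaluator might track whether each broad ability has at least two qualitatively different narrow-ability indicators. The subtest-to-CHC classifications shown are illustrative assumptions, not the published XBA classification tables.

```python
from collections import defaultdict

# Hypothetical subtest records: (battery, subtest, CHC broad ability, CHC narrow ability).
# Classifications are illustrative assumptions, not the published XBA tables.
administered = [
    ("WISC-V", "Digit Span", "Gsm", "Memory Span"),
    ("WISC-V", "Picture Span", "Gsm", "Working Memory Capacity"),
    ("CTOPP-2", "Elision", "Ga", "Phonetic Coding"),
    ("CTOPP-2", "Blending Words", "Ga", "Phonetic Coding"),
]

def adequately_measured(subtests):
    """A broad ability counts as adequately measured only when at least two
    qualitatively different narrow abilities have been sampled within it."""
    narrow_by_broad = defaultdict(set)
    for _battery, _subtest, broad, narrow in subtests:
        narrow_by_broad[broad].add(narrow)
    return {broad: len(narrows) >= 2 for broad, narrows in narrow_by_broad.items()}

print(adequately_measured(administered))
# {'Gsm': True, 'Ga': False} -- Ga still needs a second, qualitatively different indicator
```

In this toy example, both CTOPP-2 subtests sample the same narrow ability, so Ga is flagged as not yet adequately covered despite two subtests being administered.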

The XBA Worksheets (Flanagan, Ortiz, Alfonso) provide a structured template for organizing cross-battery data by CHC broad and narrow ability.
📐 CHC Framework Links

XBA maps directly onto CHC theory. For SLD identification, the key broad abilities typically assessed are:

  • Grw — Reading/Writing (decoding, fluency, comprehension, spelling)
  • Ga — Auditory Processing (phonetic coding — core for dyslexia)
  • Gsm — Short-Term/Working Memory (memory span, WM capacity)
  • Gs — Processing Speed (RAN, fluency tasks)
  • Gf — Fluid Reasoning (when ruling out ID or reasoning deficits)
  • Gc — Crystallized Intelligence (language, vocabulary — for exclusionary purposes)
The XBA 3rd edition updated its classifications to align with the revised CHC taxonomy (Schneider & McGrew), adding Gl (long-term storage) and Gv considerations.
Required Components & Steps
📋 XBA in Practice — Core Principles

The XBA process begins by identifying which CHC broad and narrow abilities are most relevant to the referral question, then systematically selecting a primary battery and supplementing it to ensure adequate construct coverage. Once the cross-battery data is organized — typically using the XBA worksheets or XBASS software — the evaluator examines whether a processing deficit is present, whether it is linked to the observed academic weakness, and whether exclusionary factors have been ruled out.

The key principle is that no single battery fully covers the CHC model at the narrow-ability level. XBA addresses this by treating subtests from any co-normed or similarly-normed battery as interchangeable within CHC constructs — allowing for a more complete assessment picture without redundancy.

For the full XBA framework — including construct classification tables, norming guidelines, and the XBASS software — see Essentials of Cross-Battery Assessment (Flanagan, Ortiz & Alfonso, 3rd ed.) and the companion XBASS software. This overview is intended as a conceptual orientation, not a procedural substitute for the published resource and formal training.
Strengths & Limitations
⚖️ Pros & Cons
Strengths
+ Theoretically grounded in CHC — the most empirically supported cognitive theory
+ Forces systematic coverage of all relevant constructs; reduces measurement gaps
+ Allows evaluators to use their existing batteries flexibly
+ Clear cognitive-academic linkage aids instructional planning
+ Well-documented method with published resources (Flanagan et al.)
Limitations
– Norm mixing: combining subtests from different standardization samples is psychometrically problematic
– High test burden — time-intensive for both evaluator and student
– Requires deep CHC knowledge to implement correctly
– XBASS software adds cost; manual worksheets are cumbersome
– Results in complex documentation that can be hard to translate for ARD teams
🔬 When to Use XBA Clinically
  • Complex profiles where a single battery clearly doesn't cover the relevant constructs (e.g., WISC-V only for a dyslexia referral with no phonological measures).
  • Re-evaluations where prior testing used a different battery and you need to supplement rather than re-administer everything.
  • Construct-specific questions — e.g., "Is working memory or processing speed driving the academic deficit more than phonological processing?"
  • When you have CHC training and your district is comfortable with the documentation approach.
Practical note: In Texas school settings, most diagnosticians are effectively using XBA principles informally (e.g., adding CTOPP-2 to WISC-V evaluations) without necessarily labeling it XBA. The systematic framework simply makes that practice explicit and defensible.
Tools Commonly Used
🧪 XBA — Assessment Battery

Core cognitive batteries (choose one as anchor):

WJ-V COG · WISC-V · KABC-II · CAS-2 · DAS-II

Common supplements to fill Ga (phonological) gap:

CTOPP-2 · WJ-V ACH Sound Awareness · WIAT-IV Phonological Processing · PAF / PAST

Achievement batteries (Grw):

WJ-V ACH · WIAT-IV · KTEA-3 · WRMT-III

Organization tools:

XBASS Software (Ortiz) · XBA Worksheets (Flanagan et al., 3rd ed.)
🤠

Texas Applicability — XBA

Texas TAC §89.1040 requires a comprehensive individual evaluation that uses a variety of assessment tools and strategies — XBA satisfies this requirement when properly implemented. The state does not prescribe a specific cognitive model, so CHC-based cross-battery analysis is permissible.

The Texas Dyslexia Handbook requires evaluation of phonological processing, which aligns directly with XBA's emphasis on filling Ga construct gaps with CTOPP-2 or equivalent. XBA is most defensible in Texas when the cross-battery additions are clearly documented by CHC construct in the FIE, and cognitive-academic linkages are explicitly stated.

Practical reality: Most Texas diagnosticians are not formally labeling their approach "XBA" — but many are applying XBA principles when they add CTOPP-2, TOC, or TOWRE-2 to supplement a WISC-V or WJ-V COG battery. The XBA framework provides the theoretical justification for why those additions matter.

Approach 2 · Three Texas Models
Patterns of Strengths & Weaknesses
PSW identifies SLD through a specific cognitive-academic pattern: a processing weakness that co-occurs with an academic weakness, while other cognitive abilities remain relatively intact. In Texas, three distinct PSW models are in use — each with different criteria, tools, and theoretical grounding. Each model is covered in depth in its own section below.
⚖️
3 Models
Dehn · Flanagan DD/C · C-SEP
The Three PSW Models Used in Texas
🧠 Dehn — PSWM & PPA

Processing Strengths and Weaknesses Model (PSWM) and the Psychological Processing Analyzer (PPA) — developed by Milton Dehn.

Operationalizes PSW through CHC-grounded processing constructs with particular emphasis on memory and executive function. The PPA software scores and integrates results across batteries to identify the required discrepancy/consistency pattern.

WISC-V · WJ-V COG · PPA Software
📊 Flanagan — Dual Discrepancy/Consistency

DD/C Model (Flanagan, Ortiz & Alfonso) — requires dual evidence: a processing weakness that is discrepant from cognitive strengths, AND consistent with (i.e., predicts) the observed academic weakness.

The most formally operationalized PSW model — uses XBA framework and XBASS software for statistical comparison. Closely tied to CHC theory and cross-battery analysis principles.

Any CHC battery · XBASS Software
🤠 C-SEP — Schultz & Stephens

Core-Selective Evaluation Process — Texas-developed by Dr. Edward Schultz (Midwestern State) and Dr. Tammy Stephens. First published in DiaLog (TEDA's journal) in 2015; updated 2024 handbook.

Although classified as a PSW model, C-SEP's founders align it philosophically with RTI — placing existing data first and using standardized testing selectively. Review → Plan → Assess → Decide.

WJ-V / WISC-V / KABC-II · csep.online
PSW Shared Foundations & The Texas Context
🧩 What All PSW Models Share

Despite meaningful differences in operationalization, all three models share a common logical framework:

  • Cognitive weakness (W) — at least one processing area that falls significantly below average and below the student's own cognitive strengths
  • Academic weakness (Aw) — achievement deficit in the domain linked to the cognitive weakness
  • Cognitive strength (S) — at least one cognitive area that remains at or above average, demonstrating the deficit is specific rather than global
  • Theoretical link — the cognitive weakness must be theoretically and empirically connected to the academic deficit (e.g., phonological processing → decoding)
  • Exclusionary factors — all IDEA/Texas TAC exclusions must be ruled out
The Learning Disabilities Association of America (LDA) endorses PSW models as most aligned with current research on SLD identification, noting that SLD is defined as a disorder in basic psychological processes — making cognitive assessment central. ldaamerica.org ↗
The models diverge on how these components are operationalized — which instruments, which cutpoints, which statistical comparisons, and whether algorithmic or judgment-based approaches are used.
⚠️ The PSW Controversy

PSW has been the subject of significant professional debate since 2010:

  • Model inconsistency: Studies show that different PSW models classify different students as SLD even when applied to identical data — undermining claims that they measure the same construct.
  • Circular reasoning: Requiring a cognitive-achievement discrepancy pattern resembles the ability-achievement discrepancy model IDEA 2004 was designed to move away from.
  • Research base: Proponents (Hale, Naglieri, Flanagan) and critics (Fletcher, Stuebing, Miciak) have reached opposite conclusions — the field has not reached consensus.
  • LDA endorses PSW: The Learning Disabilities Association of America specifically endorses PSW models as most aligned with current research on SLD, citing that LD is defined as a disorder in basic psychological processes — making cognitive assessment central to identification.
  • Texas-specific critique (2025): A peer-reviewed analysis of TEA guidance documents (Pater-Rov, 2025) found that TEA's own SLD guidance continues to reference PSW as an acceptable method despite the lack of evidence supporting it. The review also noted that TEA's guidance describes only RTI and PSW methods, leaving no room for component-based approaches. Pater-Rov recommends removing PSW model references from TEA guidance in favor of language aligned with current evidence-based practice research.
Note on C-SEP: C-SEP is classified as PSW on the Texas conference slide, but its founders align it philosophically with RTI. It is best understood as a bridge model — data-first, selective-testing, and RTI-compatible while meeting comprehensive evaluation requirements.
Bottom line for Texas practice: PSW is legally permissible under TAC §89.1040 and widely used statewide — but the evidentiary debate is real and ongoing. Document your model explicitly, rely on multiple data sources, and avoid basing eligibility on cognitive pattern alone. The more a determination rests on functional, educational, and intervention data in addition to cognitive pattern, the more defensible it is. Source: Pater-Rov, M. (2025). Literature Reviews in Education and Human Services, 4(2), 41–54; Fletcher & Miciak (2019).
Model Comparison at a Glance
Feature | Dehn PSWM / PPA | Flanagan DD/C | C-SEP (Schultz & Stephens)
Theoretical base | CHC theory with emphasis on memory & executive function processing constructs | CHC theory — the same broad-ability framework as XBA | CHC theory + RTI philosophy; oral language prominently integrated
Core requirement | Processing weakness + academic weakness + relative cognitive strength; consistency of weakness, discrepancy from strength | Dual requirement: processing weakness must be discrepant from strengths AND consistent with (predictive of) academic deficit | Review of existing data → core battery → selective additions; PSW pattern integrated with all data sources via professional judgment
Decision process | PPA software scores and integrates cross-battery data algorithmically | XBASS software (or manual XBA worksheets) — statistical discrepancy/consistency analysis | Professional judgment throughout; structured worksheet supports data triangulation; not algorithm-driven
Software / tools | PPA (required) | XBASS (strongly recommended) | No proprietary software required — C-SEP worksheets are free
Test burden | Moderate-high — multiple batteries often needed to fill construct gaps for PPA | High — XBA fill-in approach requires systematic supplementation | Designed to be efficient — core battery + selective additions only; lowest burden of the three
Texas familiarity | Moderate — used by some diagnosticians; PPA less widely known than DD/C | Moderate — well-known but requires XBA training | High — Texas-developed; trained extensively through TEDA; 2024 handbook
Philosophical alignment | Traditional PSW — begins with cognitive testing | Traditional PSW — begins with cognitive testing; tightly XBA-integrated | RTI-aligned philosophy — begins with existing data; testing is selective
🤠

Texas Applicability — PSW Generally

Texas TAC §89.1040(d) permits "other research-based procedures" for SLD identification, which encompasses all three PSW models. The state does not mandate a specific model. All three are legally permissible in Texas when properly implemented with comprehensive evaluation data, documented exclusionary factors, and multiple data sources.

The 2025 TEDA conference explicitly identified all three models — Dehn PSWM, Flanagan DD/C, and C-SEP — as the three most commonly used PSW approaches in Texas, reflecting their growing presence in Texas diagnostic practice.

Critical caveat: When using any PSW model, document which model you are applying and state your criteria explicitly. Vague references to "a pattern of strengths and weaknesses" without specifying the model are difficult to defend at due process.

A 2025 peer-reviewed analysis (Pater-Rov) found that TEA's guidance documents have improved over time but still contain inconsistencies — including continued reference to PSW approaches despite the lack of a strong evidence base, and the absence of component-based models as an option. Texas diagnosticians should be aware that the state's own policy documents are a work in progress. Use multiple data sources, document your reasoning, and do not rely on cognitive pattern alone.

PSW Model 1 · Dehn
Processing Strengths & Weaknesses Model
Developed by Milton Dehn, the PSWM operationalizes PSW through CHC-grounded processing constructs with particular emphasis on working memory, executive function, and processing speed. The Psychological Processing Analyzer (PPA) software integrates cross-battery scores to identify the required discrepancy/consistency pattern algorithmically.
Active
Requires PPA software; used by trained evaluators
Overview & Theoretical Basis
🧠 What It Is

Milton Dehn developed the PSWM as a CHC-grounded approach that operationalizes PSW criteria using standardized cognitive and academic test data. Dehn's framework places particular emphasis on cognitive processing constructs — especially working memory (Gsm), processing speed (Gs), and executive function — as the mechanisms underlying most SLD presentations.

The model requires the evaluator to:

  • Identify at least one processing weakness (W) below a specified cutpoint
  • Identify at least one cognitive strength (S) that is both above the cutpoint and significantly discrepant from the weakness
  • Confirm a corresponding academic weakness (Aw) in the same functional domain
  • Demonstrate the cognitive weakness is consistent with the academic weakness and discrepant from the cognitive strength
Dehn's books Working Memory and Academic Learning (2008) and Essentials of Processing Assessment (2nd ed., 2014) provide the theoretical and practical foundation for this model.
💻 The PPA — Psychological Processing Analyzer

The Psychological Processing Analyzer (PPA) is scoring software developed by Dehn that integrates subtest and composite scores from multiple batteries and applies the PSWM criteria algorithmically. It is the primary implementation tool for this model.

Key features of the PPA:

  • Accepts scores from multiple cognitive batteries (WISC-V, WJ-V COG, KABC-II, CAS-2, and others)
  • Classifies each subtest by CHC broad and narrow ability
  • Identifies which subtests meet "strength" vs. "weakness" criteria based on normative and intra-individual comparison
  • Determines whether the required discrepancy/consistency pattern is present
  • Generates a report-ready summary of the PSW analysis
Important: The PPA is a purchased software tool — it is not free. Access to the PPA and familiarity with Dehn's processing framework are prerequisites for implementing this model with fidelity.
PSWM Criteria & Process
📋 PSWM Implementation — Step by Step
1
Select batteries to sample the relevant processing constructs. Choose a primary cognitive battery and supplement as needed to adequately cover the constructs of concern. Dehn emphasizes working memory (Gsm), processing speed (Gs), long-term retrieval (Glr), and phonological processing (Ga) for reading-related SLD.
Common combination: WISC-V (WMI, PSI) + WJ-V COG (Gsm, Gs clusters) + CTOPP-2 (Ga — phonological).
2
Enter scores into the PPA software. Input subtest and composite scores from all administered batteries. The PPA classifies each measure by CHC construct and flags which fall in the "weakness" range (typically SS < 85 or below the student's own mean) and which qualify as "strengths."
The PPA's intra-individual comparison is a key feature — it identifies strengths and weaknesses relative to the student's own cognitive profile, not just normative cutpoints.
3
Confirm the discrepancy between cognitive strength(s) and cognitive weakness. The processing weakness must be statistically discrepant from at least one cognitive strength area. The PPA calculates this discrepancy and indicates whether it meets the required threshold.
Dehn uses both normative (vs. population mean) and ipsative (vs. student's own mean) comparison to identify this discrepancy.
4
Document the academic weakness in the corresponding domain. Administer achievement measures in the area linked to the cognitive processing weakness. The academic score must also fall below the normative cutpoint and be consistent with (lower than or equal to) the cognitive weakness.
For dyslexia: Phonological processing weakness (Ga) → Basic Reading Skills & Reading Fluency (Grw). Consistency = the academic deficit is not better than the cognitive deficit.
5
Apply the full PSWM pattern check via PPA. The software confirms whether the required W + S + Aw pattern is present and whether all three criteria are satisfied simultaneously. A valid PSW pattern requires: W is below average AND discrepant from S, AND Aw is below average AND consistent with W. (A minimal logic sketch of this pattern check follows these steps.)
6
Integrate with all other evaluation data and apply exclusionary factors. The PPA result is one component of the FIE — it must be integrated with observations, history, parent/teacher input, and intervention data. All IDEA/Texas exclusionary factors must be explicitly ruled out.
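As a conceptual illustration only — the PPA applies its own proprietary scoring and thresholds — the pattern check described in step 5 might be sketched as follows, with the SS < 85 cutpoint and a 15-point strength-weakness gap used purely as placeholder values.

```python
# Illustrative thresholds only -- the PPA applies its own proprietary criteria.
AVERAGE_CUTOFF = 85       # standard-score floor of the average range (assumption)
DISCREPANCY_POINTS = 15   # strength-vs-weakness gap used here for illustration (1 SD)

def psw_pattern_present(cognitive_weakness, cognitive_strength, academic_weakness):
    """Check the W + S + Aw pattern described above (all inputs are standard scores)."""
    w_below_average = cognitive_weakness < AVERAGE_CUTOFF
    s_intact = cognitive_strength >= 90                    # strength at or above average (assumption)
    discrepant = (cognitive_strength - cognitive_weakness) >= DISCREPANCY_POINTS
    aw_below_average = academic_weakness < AVERAGE_CUTOFF
    consistent = academic_weakness <= cognitive_weakness   # achievement no better than the deficit
    return w_below_average and s_intact and discrepant and aw_below_average and consistent

# Example: Ga weakness of 78, Gf strength of 105, Basic Reading Skills of 76
print(psw_pattern_present(78, 105, 76))  # True
```

The sketch restates the logic only; the PPA's actual analysis also incorporates ipsative comparisons against the student's own mean and CHC-level classifications that are not modeled here.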
Strengths & Limitations
⚖️ Pros & Cons
Strengths
+ Clear operationalization — explicit criteria reduce subjectivity compared to informal PSW interpretation
+ PPA software handles the cross-battery integration and pattern analysis algorithmically
+ Strong emphasis on working memory and processing speed — consistent with current neuroscience of learning disabilities
+ Can be used with multiple cognitive batteries — not locked to one instrument
+ Intra-individual comparison addresses the "low all around" profile problem
Limitations
– PPA software requires purchase — adds cost; must be kept current
– Less widely known in Texas than C-SEP or informal PSW application
– Shares the general PSW debate — research on inter-model consistency is ongoing; professional judgment required in interpretation
– High test burden when multiple batteries are needed to fill construct gaps
– Requires training in both CHC processing theory and PPA software operation
🔬 When to Use Dehn PSWM Clinically
  • When you have access to the PPA and training in Dehn's processing framework — the software is what makes this model distinctly more systematic than informal PSW application.
  • Working memory or processing speed is a central concern — Dehn's model is especially well-suited to cases where Gsm or Gs deficits are driving academic difficulties, such as math disability or written expression.
  • When RTI data is insufficient and you want a PSW model with a clearly documented algorithmic decision process.
  • Complex cognitive profiles where intra-individual comparison is needed to identify specific weaknesses that aren't below average normatively but are significantly lower than the student's own cognitive ceiling.
Texas note: The PSWM/PPA model was specifically named at the 2025 TEDA conference as one of the three PSW models used in Texas. While less commonly trained than C-SEP, it is a valid and increasingly recognized option for Texas diagnosticians who invest in PPA access and Dehn's processing assessment framework.
Tools Commonly Used
🧪 Dehn PSWM — Assessment Battery

Cognitive batteries (enter scores into PPA):

WISC-V · WJ-V COG · KABC-II · CAS-2 · NEPSY-II (selected subtests)

Key processing constructs targeted (Dehn emphasis):

Gsm — Working Memory (WISC-V WMI, WJ-V Gwm) · Gs — Processing Speed (WISC-V PSI, WJ-V Gs) · Ga — Phonological (CTOPP-2) · Glr — Long-Term Retrieval

Achievement batteries:

WJ-V ACH · WIAT-IV · KTEA-3

Software:

PPA — Psychological Processing Analyzer (Dehn)
🤠

Texas Applicability — Dehn PSWM

The Dehn PSWM model is permissible under Texas TAC §89.1040(d) as an "other research-based procedure" for SLD identification. The PPA's algorithmic pattern analysis provides clear documentation of the discrepancy/consistency criteria — which can strengthen defensibility compared to informal PSW interpretation.

Named explicitly at the 2025 TEDA conference as one of the three PSW models most commonly used in Texas. Diagnosticians using this model should document that the Dehn PSWM was the specific approach applied, specify the PPA version used, and explicitly state the criteria that were met.

PSW Model 2 · Flanagan, Ortiz & Alfonso
Dual Discrepancy / Consistency Model
The DD/C model (Flanagan, Ortiz & Alfonso) requires two simultaneous conditions: the processing weakness must be statistically discrepant from the student's cognitive strengths, AND consistent with (predictive of) the academic deficit. Implemented through XBA principles and XBASS software.
📊
Active
Rigorous; XBASS-supported; LDA-endorsed PSW model
Overview & Theoretical Basis
📊 What It Is

The Dual Discrepancy/Consistency (DD/C) model was developed by Dawn Flanagan, Samuel Ortiz, and Vincent Alfonso as a PSW approach tightly integrated with CHC theory and the XBA framework. It is sometimes called the "Flanagan-Kaufman" operational definition in older literature, though the DD/C label is now standard.

The model is defined by two simultaneous requirements:

  • Discrepancy: The processing weakness (W) is significantly lower than the cognitive strength (S) — the student has both high and low cognitive abilities, not a uniformly flat profile.
  • Consistency: The processing weakness (W) is statistically consistent with (not significantly different from) the academic deficit (Aw) — the academic performance is no better than what the processing profile would predict.

This dual criterion is intended to address the "false positive" problem in earlier PSW models — requiring not just that a weakness and an academic deficit co-exist, but that they are statistically linked in a specific directional pattern.

The DD/C model is operationalized through the XBA framework — CHC constructs are assessed using cross-battery principles, and XBASS software (or manual XBA worksheets) calculates the required statistical comparisons.
🔗 Relationship to XBA

DD/C and XBA are deeply intertwined — they share the same authors, the same CHC theoretical framework, and the same XBASS software. The key distinction:

  • XBA is a framework for systematic assessment — it ensures construct coverage by filling gaps across batteries.
  • DD/C is a decision model — it specifies the statistical pattern (discrepancy + consistency) required to identify SLD.

In practice, an evaluator using DD/C will first apply XBA principles to ensure adequate construct coverage, then use XBASS to run the discrepancy/consistency analysis on the assembled data set.

The two approaches are best understood as layered: XBA governs how to assess; DD/C governs how to decide. You can do XBA without DD/C, but DD/C requires XBA-organized data.
DD/C Criteria & Process
📋 DD/C Implementation — Step by Step
1
Conduct a cross-battery assessment. Apply XBA principles — select a primary cognitive battery, identify construct gaps, and supplement with targeted measures from other batteries to ensure adequate CHC coverage. Document all scores by CHC broad and narrow ability.
Same process as standard XBA: WISC-V as anchor + CTOPP-2 for Ga, for example.
2
Identify the cognitive weakness (W) and cognitive strength (S). Using XBASS, determine which CHC broad ability areas fall in the below-average range (W) and which fall at or above average (S). The S must be meaningfully higher than the W.
XBASS uses confidence intervals and normative comparisons to classify each cluster as strength, average, or weakness.
3
Confirm Discrepancy — W is significantly below S. XBASS calculates whether the difference between the cognitive weakness and cognitive strength clusters is statistically significant. This is the first "D" in DD/C.
Without a true discrepancy between cognitive strength and weakness, the DD/C pattern is not established — the student may have uniformly below-average cognition, which is a different picture.
4
Assess academic achievement in the domain linked to W. Administer achievement measures targeting the academic skills theoretically linked to the cognitive weakness. Document scores in Grw (reading), Gq (math), or written expression as appropriate.
5
Confirm Consistency — Aw is not significantly different from W. XBASS compares the academic weakness score to the cognitive processing weakness score. For consistency to be established, the academic deficit must be no better than the cognitive deficit — the achievement is what the processing profile would predict.
This is the second condition in DD/C. If achievement is significantly higher than the processing weakness, the processing deficit is not functionally impairing academic performance in the expected way. (A minimal sketch of the dual criterion follows these steps.)
6
Apply exclusionary factors and integrate all data. The DD/C statistical pattern is necessary but not sufficient — apply all IDEA exclusions, document observations and history, and confirm the pattern is consistent with all other evaluation data.
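A minimal sketch of the dual criterion, assuming a single placeholder critical-difference value. XBASS performs the actual cluster-level statistical comparisons, so treat this only as a restatement of the logic, not the XBASS method.

```python
# Placeholder value -- XBASS computes actual cluster-level critical differences.
CRITICAL_DIFFERENCE = 12

def dd_c_pattern(w_cluster, s_cluster, aw_cluster):
    """Return (discrepancy_met, consistency_met) for the DD/C logic described above.

    Discrepancy:  the cognitive weakness is significantly lower than the cognitive strength.
    Consistency:  the academic weakness is not significantly better than the cognitive weakness.
    """
    discrepancy = (s_cluster - w_cluster) >= CRITICAL_DIFFERENCE
    consistency = (aw_cluster - w_cluster) < CRITICAL_DIFFERENCE
    return discrepancy, consistency

# Example: Ga cluster = 79, Gf cluster = 104, Basic Reading Skills = 82
print(dd_c_pattern(79, 104, 82))  # (True, True)
```

Note how the two conditions pull in opposite directions: the strength must sit well above the weakness, while achievement must sit close to (or below) it — which is why a uniformly low profile fails the discrepancy test and an unexpectedly strong achievement score fails the consistency test.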
Strengths & Limitations
⚖️ Pros & Cons
Strengths
+ Most formally operationalized PSW model — dual criterion addresses false positive risk
+ Tightly grounded in CHC theory — the most empirically supported cognitive model
+ XBASS software handles statistical comparisons, reducing calculation error
+ Clear distinction between discrepancy (W vs. S) and consistency (W vs. Aw) adds theoretical precision
+ Extensive published literature — Flanagan et al. books well-referenced in school psychology
Limitations
– Highest test burden of the three models — XBA fill-in approach is time-intensive
– XBASS software adds cost; norm-mixing across batteries is a known psychometric limitation
– Active research debate on PSW identification rates and inter-model consistency; clinical expertise required
– Requires deep CHC and XBA training to implement correctly
– Complex documentation can be difficult to translate for ARD teams and parents
🔬 When to Use DD/C Clinically
  • When you have XBA training and XBASS access — the model requires both to implement with fidelity.
  • Complex profiles needing statistical documentation of the cognitive-academic link — DD/C's dual criterion provides the most formal statistical support of any PSW model.
  • Cases where RTI data is unavailable and you need a rigorous alternative-research-based pathway with explicit documentation of discrepancy and consistency.
  • Private/clinical evaluations where a thorough cognitive-academic analysis is expected and time is not the primary constraint.
Practical caution: Many Texas diagnosticians apply informal versions of this logic (processing weakness + academic deficit) without the full XBASS analysis. That informal application does not constitute DD/C — if you are citing DD/C in your FIE, you should have the statistical comparisons to back it up.
Tools Commonly Used
🧪 DD/C — Assessment Battery

Core cognitive batteries (choose one as XBA anchor):

WJ-V COG · WISC-V · KABC-II · CAS-2 · DAS-II

Supplements to fill Ga (phonological) gap:

CTOPP-2 · WJ-V ACH Sound Awareness · WIAT-IV Phonological Processing

Achievement batteries (Grw/Gq):

WJ-V ACH · WIAT-IV · KTEA-3

Software & frameworks:

XBASS Software (Flanagan, Ortiz, Alfonso) · XBA Worksheets (Flanagan et al., 3rd ed.)
🤠

Texas Applicability — Flanagan DD/C

The DD/C model is permissible under Texas TAC §89.1040(d) as a research-based procedure for SLD identification. Its statistical documentation of discrepancy and consistency provides a strong evidentiary foundation when properly implemented with XBASS. Named explicitly at the 2025 TEDA conference as one of the three PSW models most commonly used in Texas.

The Texas Dyslexia Handbook requirement for phonological processing documentation aligns with DD/C's CHC-based approach to identifying Ga weakness as the cognitive deficit. When using DD/C for dyslexia identification, the CTOPP-2 or equivalent provides the Ga cluster data needed for both the discrepancy (vs. other cognitive strengths) and consistency (vs. reading achievement) analyses.

Documentation tip: In the FIE, name the model explicitly ("Dual Discrepancy/Consistency model, Flanagan, Ortiz & Alfonso"), cite the XBASS analysis, and state which discrepancy and consistency criteria were met with the specific scores.

PSW Model 3 · RTI-Aligned Philosophy · Texas-Developed
Core-Selective Evaluation Process
An efficient, comprehensive PSW approach developed by Texas educators Dr. Edward Schultz and Dr. Tammy Stephens. C-SEP begins with core tests from a primary battery, then selectively adds targeted measures only when existing data indicates a possible weakness — reducing test burden while maintaining rigor and legal defensibility.
Active
Texas-based; growing training network; 2024 handbook released
Overview & Theoretical Basis
🧩 What It Is

The Core-Selective Evaluation Process (C-SEP) was developed by Dr. Edward Schultz (Midwestern State University) and Dr. Tammy Stephens, both Texas-based educators with deep roots in educational diagnostics. C-SEP was first introduced in the DiaLog (the journal of the Texas Educational Diagnosticians' Association) in 2015 and has been refined through ongoing field feedback, training, and a 2024 updated handbook.

Although C-SEP can be used to document a pattern of strengths and weaknesses, its founders — particularly Dr. Schultz — have stated in training that C-SEP's philosophy aligns much more closely with the RTI model than with traditional PSW. The framework places existing data and multiple sources of information at the center of the evaluation process, with standardized testing added selectively only when that data demands it. This stands in contrast to PSW models that begin with cognitive testing and look for a predetermined pattern.

Its defining characteristic is the core-then-selective structure: evaluators begin with a thorough Review of all existing data, then — if needed — administer core tests from an established battery (WJ, Wechsler, or Kaufman), adding targeted selective assessments only when the data signals a gap that must be filled to answer the referral question.

C-SEP emerged from a critical analysis of all published PSW approaches, but it is best understood as a data-first, efficiency-focused framework that bridges RTI's emphasis on existing instructional data with the comprehensive evaluation requirements of IDEA and Texas TAC.
📐 CHC Framework & Theoretical Grounding

C-SEP is grounded in CHC (Cattell-Horn-Carroll) theory and uses norm-referenced cognitive, achievement, and oral language measures. It integrates multiple sources of data with individualized assessment to identify a pattern of strengths and weaknesses.

Key theoretical elements:

  • CHC-organized constructs — cognitive and academic abilities are understood through CHC broad and narrow ability framework
  • Multiple data sources — standardized testing is integrated with existing data, observations, work samples, history, and input forms
  • Oral language integration — C-SEP explicitly includes oral language measures alongside cognitive and achievement batteries, recognizing oral language as foundational to academic skill development
  • Professional judgment — C-SEP emphasizes evaluator professional judgment throughout the process rather than algorithmic score comparison
  • Task demands analysis — a structured examination of what skills a task actually requires, used during the decision phase to interpret score patterns meaningfully
The 4-Step C-SEP Process
📋 Review → Plan → Assess → Decide

C-SEP moves through four sequential phases. The framework is designed so that each phase informs the next — existing data shapes the assessment plan, assessment results inform the decision, and professional judgment operates throughout.

1
REVIEW — Gather and analyze all available existing data before any testing begins. The goal is to establish a preliminary picture of strengths and concerns from records, observations, screening history, and input from those who know the student.
2
PLAN — Use the review findings to develop a focused referral question and a targeted assessment plan. The core-selective principle means only administering what the existing data indicates is necessary (a minimal illustration of this review-driven planning follows these steps).
3
ASSESS — Administer the planned battery and any targeted supplemental measures the data has indicated are needed. Cognitive, achievement, and oral language measures are considered based on the referral question.
4
DECIDE — Triangulate all data sources — standardized scores, existing records, observations, work samples, and stakeholder input — and apply professional judgment to reach a legally defensible eligibility decision.
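As an illustration of the review-driven Plan step, the sketch below maps hypothetical review flags to candidate selective measures drawn from the selective-additions list later in this section. The mapping is an assumption for demonstration only and is not C-SEP's published decision rules.

```python
# Hypothetical mapping for illustration only -- the C-SEP handbook and training,
# not this table, govern which selective measures are actually warranted.
SELECTIVE_MEASURES = {
    "phonological processing": ["CTOPP-2"],
    "orthographic processing": ["TOC"],
    "reading fluency": ["TOWRE-2", "GORT-5"],
    "spelling": ["TWS-5"],
    "math computation": ["KeyMath-3"],
}

def plan_selective_testing(review_flags):
    """Given concerns flagged during the Review step, list candidate selective
    additions to the core battery (Plan step). Unflagged areas get no extra testing."""
    plan = []
    for concern in review_flags:
        plan.extend(SELECTIVE_MEASURES.get(concern, []))
    return plan

# Example: DIBELS history and teacher input flag decoding and fluency concerns
print(plan_selective_testing(["phonological processing", "reading fluency"]))
# ['CTOPP-2', 'TOWRE-2', 'GORT-5']
```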
Want the full framework? The complete C-SEP process — including structured worksheets, task demands analysis guidance, and decision criteria — lives in the 2024 C-SEP Handbook (Schultz & Stephens) and in TEDA-sponsored training. This overview is intended to orient you to the approach, not to substitute for the source material or formal training in the model.
Strengths & Limitations
⚖️ Pros & Cons
Strengths
+ Explicitly designed to be efficient — reduces over-testing by only adding selective measures when warranted
+ Texas-developed and trained — aligns with Texas TAC, deeply familiar to Texas educational diagnosticians
+ Emphasizes professional judgment and data triangulation over algorithmic score comparison
+ Integrates oral language alongside cognitive and academic — broader than typical PSW models
+ Includes structured worksheets, a published handbook (2024), and active training network
+ Designed to be legally defensible and documentation-friendly for FIE purposes
Limitations
– Can produce a PSW pattern as an output — so it may be subject to some of the same critiques when applied that way, though the founders' intent is broader than PSW pattern-matching
– Less nationally recognized than RTI/MTSS; primarily known in Texas and among TEDA members
– State alignment documents are available for NJ, NM, OK, and VA, but no Texas-specific alignment document has been published yet (though the model was developed here)
– Training is required to implement with fidelity — not a model evaluators can pick up and use without preparation
🔬 When to Use C-SEP Clinically
  • When RTI data is limited or unavailable — student transferred, was homeschooled, or the district's MTSS system doesn't have adequate documentation. C-SEP provides a comprehensive, legally defensible alternative pathway.
  • When existing data is rich — C-SEP's Review step is particularly powerful when there is a lot of existing data (DIBELS history, prior evals, grades, teacher input). It leverages that data rather than defaulting to a full battery immediately.
  • When test burden is a concern — C-SEP's selective approach is well-suited for students who have already been tested extensively, for re-evaluations, or for younger students where long testing sessions are impractical.
  • For diagnosticians trained in C-SEP — the structured 4-step process and accompanying worksheets make it a practical everyday framework, not just a theoretical model.
  • Complex referrals with multiple data sources — C-SEP's data triangulation and task demands analysis steps are well-suited for cases where scores alone don't tell the full story.
Texas context: C-SEP was developed by and for Texas educational diagnosticians. Dr. Schultz has stated in training that C-SEP aligns more with the RTI model than with traditional PSW — making it a strong choice for districts already operating within an MTSS framework who need a bridge to comprehensive evaluation. Dr. Schultz is co-chair of the Learning Disabilities Association's Professional Advisory Board, and Dr. Stephens trains diagnosticians nationally.
Tools Commonly Used
🧪 C-SEP — Assessment Battery

Core cognitive batteries (start here — choose one):

WJ-V COG · WISC-V · KABC-II

Core achievement batteries:

WJ-V ACH · WIAT-IV · KTEA-3

Oral language measures (key C-SEP component):

CELF-5 · WJ-V COG Oral Language cluster · CASL-2

Selective additions — added only when core data indicates need:

CTOPP-2 (phonological processing) · TOC (orthographic processing) · TOWRE-2 (reading fluency/efficiency) · GORT-5 (oral reading fluency) · TWS-5 (spelling detail) · KeyMath-3 (math detail)

Existing data sources — central to the Review step:

DIBELS / AIMSweb+ screening history · Parent & teacher input forms · Work samples · Intervention records / fidelity data

C-SEP resources:

2024 C-SEP Handbook (Schultz & Stephens) · C-SEP Review & Assessment Worksheet · C-SEP Supplemental Resources (150+ pages) · 🌐 csep.online ↗
🤠

Texas Applicability — C-SEP

C-SEP is a Texas-developed framework — created by Dr. Edward Schultz (Midwestern State University, Wichita Falls) and Dr. Tammy Stephens, both with deep ties to Texas special education. It was first published in the DiaLog, the journal of the Texas Educational Diagnosticians' Association (TEDA), and has been widely trained in Texas school districts.

C-SEP aligns with Texas TAC §89.1040 as a comprehensive, research-based evaluation approach that uses multiple data sources and addresses all required SLD criteria. Its emphasis on existing data review, targeted assessment planning, and professional judgment is well-suited to Texas diagnosticians' daily practice.

The Texas Dyslexia Handbook requirements — phonological processing documentation, multiple data sources, documentation of failure to respond — are all addressed within the C-SEP framework. C-SEP is one of the most practical and contextually appropriate PSW-based approaches for Texas educational diagnosticians, particularly those who have completed C-SEP training.

Training available: Dr. Schultz and Dr. Stephens offer face-to-face, distance, and on-demand training for both individual evaluators and district-wide implementation. Free "Beyond the Score" webinars are offered periodically. Visit csep.online for current training offerings.

Approach 3 · RTI / MTSS
Response to Intervention / MTSS
Documents SLD through a student's inadequate response to high-quality, evidence-based instruction delivered in a multi-tiered support system. IDEA 2004-supported pathway required in Texas prior to most evaluations, and an essential complement to PSW-based identification.
📋
Active
IDEA-supported; required prior to most evaluations
Overview & Theoretical Basis
📈 What It Is

RTI (Response to Intervention) / MTSS (Multi-Tiered System of Supports) uses a student's response — or failure to respond — to evidence-based instruction as a central component of SLD identification. Rather than looking for a cognitive-academic discrepancy, RTI asks: Did the student fail to make adequate progress despite high-quality, fidelity-implemented instruction at increasingly intensive levels?

Three-tier structure:

  • Tier 1: High-quality core instruction for all students. Universal screening (DIBELS, AIMSweb+) identifies at-risk students. ~80% of students should be successful at Tier 1.
  • Tier 2: Supplemental, targeted small-group intervention (3–4x/week, ~30 min). Progress monitoring every 1–2 weeks. ~15% of students may need Tier 2.
  • Tier 3: Intensive, individualized intervention. Frequent progress monitoring. Students who fail to respond here are primary candidates for special education evaluation. ~5% of students.
RTI does NOT replace comprehensive evaluation — IDEA still requires a full evaluation including cognitive, academic, behavioral, and health data. RTI data provides the "failure to respond" documentation that is one criterion for SLD.
⚖️ Legal Basis & NASP Position

IDEA 2004, 34 CFR §300.307–300.311:

  • States may NOT require a severe discrepancy between IQ and achievement as the sole criterion for SLD.
  • States must permit the use of a process based on the student's response to scientific, research-based intervention.
  • States may permit the use of other alternative research-based procedures.

NASP Position (2011, reaffirmed 2020):

  • Endorses RTI/MTSS as the preferred SLD identification model.
  • Does not endorse PSW as a validated primary identification method.
  • Emphasizes that RTI must be coupled with a comprehensive evaluation — not used as a standalone SLD determination.
LDA's position is that RTI data should optimally be part of every SLD evaluation — but RTI data alone is not sufficient for SLD identification. A PSW-based comprehensive evaluation that integrates RTI data represents best practice in Texas.
Required Components & Steps
📋 RTI-Based SLD Identification Process
1
Document Tier 1 core instruction quality and universal screening results. Must show the student received high-quality, evidence-based core instruction. Universal screening data (DIBELS, AIMSweb+) establishes baseline risk status.
Texas requirement: 19 TAC §89.1011 requires districts to implement a comprehensive, ongoing student assessment system. DIBELS/AIMSweb+ benchmark data is critical evidence here.
2
Document Tier 2 intervention — program, duration, frequency, fidelity. Specify the evidence-based intervention used, group size, frequency, and duration. Attach fidelity implementation data. Show progress monitoring data across the Tier 2 period.
Inadequate fidelity = the RTI data cannot be used for SLD purposes. Fidelity documentation is essential and frequently the weakest part of RTI eligibility files.
3
Document Tier 3 intensive intervention if applicable. Same requirements as Tier 2 — program, frequency, duration, fidelity, progress monitoring. The pattern of non-response across both Tier 2 and Tier 3 is the most compelling evidence.
Dual discrepancy model (Fuchs & Fuchs): student is both below the peer benchmark AND showing a slope (growth rate) significantly below peers — both criteria together predict SLD more accurately than either alone. (A minimal computational sketch of this check follows these steps.)
4
Conduct a comprehensive individual evaluation. RTI data alone does not establish eligibility. A full FIE must include: cognitive/intellectual assessment, academic achievement testing, observations, developmental and health history, parent and teacher input, and review of existing data.
Texas TAC §89.1040 requires a full individual evaluation regardless of the identification approach used.
5
Apply the SLD criteria — all eight areas, exclusionary factors, and data integration. Document which of the eight SLD areas is affected, apply all exclusionary factors, and integrate the RTI data with comprehensive evaluation findings into an eligibility determination.
The RTI data addresses "failure to respond to instruction" — the comprehensive evaluation addresses "the student has a specific learning disability" as defined by IDEA.
6
Ensure the evaluation team (not RTI data alone) makes the eligibility decision. The ARD committee — including the parent — makes the eligibility determination. RTI data informs but does not replace that team decision.
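A minimal computational sketch of the dual-discrepancy check, assuming weekly CBM scores and placeholder benchmark ratios. The 0.75 level ratio and 0.5 slope ratio are illustrative assumptions, not published decision rules from any CBM system.

```python
# Illustrative only: benchmark values and cut ratios are placeholders, not published norms.
def slope(scores):
    """Ordinary least-squares slope of weekly CBM scores (e.g., words correct per minute per week)."""
    n = len(scores)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(scores) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, scores))
    den = sum((x - x_mean) ** 2 for x in xs)
    return num / den

def dual_discrepancy(student_scores, peer_benchmark_level, peer_growth_rate):
    """Fuchs & Fuchs dual-discrepancy check: the student is BOTH well below the
    peer benchmark in level AND growing substantially more slowly than peers."""
    level_deficient = student_scores[-1] < 0.75 * peer_benchmark_level   # placeholder ratio
    slope_deficient = slope(student_scores) < 0.5 * peer_growth_rate     # placeholder ratio
    return level_deficient and slope_deficient

weekly_wcpm = [22, 23, 23, 24, 25, 25, 26, 26]   # 8 weeks of Tier 2 progress monitoring
print(dual_discrepancy(weekly_wcpm, peer_benchmark_level=60, peer_growth_rate=1.5))  # True
```

The point of requiring both conditions is that a low but fast-growing student, or a slow-growing student already near benchmark, is not flagged — only the combination of low level and flat slope is.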
Evaluating RTI Data Quality — A Diagnostician's Lens
🔍 What Should Be in a Well-Documented RTI Referral?

As a diagnostician, you are not running the RTI process — you are auditing whether it was run well. A useful organizing lens for this audit is the RIOT/ICEL framework (Wright, 2010), which describes the breadth of data that should be present in a strong RTI file before a referral reaches your desk.

RIOT — Four data source types the campus should have drawn from:

R — Review
Records, report cards, screener benchmarks, work samples, attendance, prior evaluation data, DIBELS/AIMSweb+ history
I — Interview
Teacher input on instructional response, parent input on home/history, student interview when appropriate — across more than one person
O — Observe
Documented classroom observation — on-task behavior, response to instruction, peer comparison, environmental factors; not just teacher report
T — Test
Universal screener data, CBM progress monitoring, diagnostic assessments, curriculum-based measures — across tiers

ICEL — Four domains the data should address:

Instruction
Was teaching evidence-based and delivered with fidelity?
Curriculum
Are skill gaps in the curriculum documented — not just test failure?
Environment
Were class size, peer factors, home context considered?
Learner
Student-specific traits — attention, motivation, self-efficacy, study skills
Diagnostician takeaway: The strongest RTI referral files draw from multiple RIOT sources across multiple ICEL domains — not just CBM slopes and one teacher interview. When reviewing existing data, use this lens to identify gaps. A file heavy on Test data (screening scores) but thin on Interview, Observation, and Curriculum/Environment information is incomplete — and that gap belongs in your REED documentation and informs what your FIE must address. (A minimal coverage-check sketch follows the framework citation below.)
Red flags in RTI files: Only one data source type (test scores only) · Environmental/instructional factors never explored · No documented observation · Parent input absent · All data from a single rater · No evidence of differentiated Tier 1 before Tier 2 placement

Framework: Wright, J. (2010). The RIOT/ICEL matrix: Organizing data to answer questions about student academic performance & behavior. How RTI Works Series. interventioncentral.org
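One way to operationalize this audit is to tag each piece of evidence in the referral file by RIOT source and ICEL domain and look for empty cells. The sketch below is a hypothetical illustration of that coverage check, not part of the RIOT/ICEL framework itself; the tagging scheme is an assumption.

```python
# Hypothetical tagging scheme for illustration; actual REED documentation is narrative.
RIOT = {"Review", "Interview", "Observe", "Test"}
ICEL = {"Instruction", "Curriculum", "Environment", "Learner"}

def audit_rti_file(evidence):
    """evidence: list of (riot_source, icel_domain) tags, one per item in the file.
    Returns the RIOT sources and ICEL domains with no supporting evidence."""
    sources = {source for source, _ in evidence}
    domains = {domain for _, domain in evidence}
    return RIOT - sources, ICEL - domains

file_items = [
    ("Test", "Learner"),        # DIBELS benchmark history
    ("Test", "Curriculum"),     # CBM progress monitoring
    ("Interview", "Learner"),   # one teacher input form
]
missing_sources, missing_domains = audit_rti_file(file_items)
print(missing_sources)   # {'Review', 'Observe'} -- gaps the FIE must address
print(missing_domains)   # {'Instruction', 'Environment'}
```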

Strengths & Limitations
⚖️ Pros & Cons
Strengths
+ IDEA 2004 explicitly supports it — a legally required component of most SLD evaluations
+ Directly answers the instructional question: "Did the student get good teaching and still struggle?"
+ Lower test burden — comprehensive evaluation supplemented by existing data
+ Earlier identification — screening and progress monitoring catch students before formal evaluation
+ Progress monitoring data continues to be useful for IEP goal-setting and service planning
+ LDA and NASP both recognize RTI as a necessary component; LDA specifically endorses PSW as the identification framework RTI feeds into
Limitations
– Depends heavily on quality of Tier 2/3 implementation — poor fidelity makes data unusable
– Does not identify the specific processing mechanism — "failed to respond" doesn't tell you why
– Risk of "waiting to fail" — students may spend years in tiers before evaluation
– Not suited to students who transfer, had inadequate instruction elsewhere, or were homeschooled
– Quality of MTSS implementation varies widely across Texas districts
🔬 When to Use RTI Clinically
  • All school-based SLD referrals — RTI/MTSS data should be part of every evaluation, documenting response to instruction before or alongside comprehensive PSW-based assessment.
  • Dyslexia identification — the Texas Dyslexia Handbook explicitly requires documented failure to respond to systematic, explicit reading instruction. RTI data is the most direct evidence of this.
  • Early elementary referrals (K–2) — RTI/MTSS data is especially powerful here because intervention is recent, progress monitoring is frequent, and the "failure to respond" picture is clear.
  • Re-evaluations — ongoing progress monitoring data from special education services provides the same kind of RTI documentation for continued eligibility.
Best practice combination (Texas): RTI tier data (failure to respond) + PSW-based comprehensive evaluation (CTOPP-2 + cognitive battery + achievement battery) = the most complete and defensible Texas SLD evaluation. LDA endorses PSW as the identification framework; RTI documents the instructional context.
Tools Commonly Used
🧪 RTI / MTSS — Assessment and Data Sources

Universal screening & progress monitoring (RTI data):

DIBELS 8th Edition AIMSweb+ STAR Reading / Math iReady Diagnostic easyCBM
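
Progress-monitoring data from screeners like these is typically summarized as a rate of improvement (ROI), the slope of probe scores across weeks, and compared to an expected growth rate. The sketch below is a minimal, illustrative calculation only; the scores and the growth target are made up, so defer to the probe publisher's norms and district decision rules.

```python
# Minimal sketch (illustrative only): ROI as the least-squares slope of weekly
# probe scores. Scores and the expected-growth target are hypothetical.

def roi_per_week(scores):
    """Ordinary least-squares slope of probe scores over consecutive weeks."""
    n = len(scores)
    weeks = list(range(1, n + 1))
    mean_x = sum(weeks) / n
    mean_y = sum(scores) / n
    numerator = sum((x - mean_x) * (y - mean_y) for x, y in zip(weeks, scores))
    denominator = sum((x - mean_x) ** 2 for x in weeks)
    return numerator / denominator

wcpm = [28, 30, 29, 33, 32, 34, 35, 36]   # hypothetical weekly words-correct-per-minute
observed = roi_per_week(wcpm)             # ~1.1 WCPM gained per week
expected = 1.5                            # placeholder growth target, not a published norm

print(f"Observed ROI: {observed:.2f} WCPM/week (target {expected:.2f})")
```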

Comprehensive evaluation — cognitive:

WISC-V WJ-V COG KABC-II

Comprehensive evaluation — processing (complements RTI):

CTOPP-2 TOC TOWRE-2

Comprehensive evaluation — achievement:

WJ-V ACH WIAT-IV KTEA-3 GORT-5 (fluency)

Intervention programs (Tier 2/3 fidelity documentation):

Barton Reading & Spelling Wilson Reading System RAVE-O 95 Phonics Core Program iReady Instruction
🤠

Texas Applicability — RTI / MTSS

Texas TAC §89.1040(d) explicitly permits the use of RTI as part of SLD identification: "A group may determine that a child has a specific learning disability if the child does not achieve adequately for the child's age or to meet state-approved grade-level standards… when provided with learning experiences and instruction appropriate for the child's age or state-approved grade-level standards."

The Texas Dyslexia Handbook requires that students be provided systematic, explicit reading instruction before a dyslexia determination — and that failure to respond to such instruction be documented. This is RTI in practice. The Handbook aligns directly with an RTI-based identification approach for dyslexia.

TEA also supports MTSS through its Texas MTSS framework, which provides guidance on Tier 1–3 implementation. Districts that implement MTSS with fidelity produce the strongest RTI documentation to complement PSW-based evaluation.

Critical caveat: Texas TAC §89.1040 still requires a full individual evaluation including cognitive assessment, observations, and comprehensive data review — RTI data alone is never sufficient for SLD eligibility in Texas. The RTI documentation strengthens the evaluation; it does not replace it.

SLD — Written Expression
Written Expression & Dysgraphia
Writing competence requires three interdependent systems: transcription (graphomotor execution and orthographic processing), language/composition (idea generation, sentence formulation, organization), and executive functioning/working memory. Any point of breakdown can result in poor written output — but the eligibility determination depends on which system is the primary driver.
✍️
Simple View
Transcription + Language + EF/WM
The Simple View of Writing
✏️ Transcription

Handwriting / Graphomotor Execution — Motor planning, pencil control, and letter formation. A graphomotor deficit means the motor program itself is impaired — not just letter memory.

Orthographic Processing — Storing and retrieving letter forms and sequences from long-term memory. Inconsistent letter production across a sample (same letter formed differently) signals an orthographic retrieval problem, distinct from graphomotor execution.

Key clinical move: Compare copy vs. dictation. If handwriting does not improve when copying from a model, the deficit is primarily graphomotor. If it improves with a model, the deficit is more likely orthographic (memory for letter forms).

Framework: Seaberry, ESC Region 11 (2025); Berninger & Wolf (2009)

💬 Language / Composition

Oral language is the foundation of written composition. Vocabulary knowledge, syntax awareness, narrative organization, and sentence formulation all drive written output quality.

Key clinical move: Compare oral vs. written output. If the student's oral narrative is rich and detailed but written output is limited, the bottleneck is transcription and WM load — not language knowledge. If both oral and written output are limited, the primary weakness is likely language-based.

Implication: SLD-Written Expression and SLD-Oral Expression can coexist. A student who cannot formulate ideas orally will also struggle in writing, but the root cause is language, not graphomotor or orthographic.

Framework: Seaberry, ESC Region 11 (2025); Texas Dyslexia Handbook p. 30

🧠 Executive Functioning / Working Memory

Automaticity of transcription is required before EF can direct energy to composition. When transcription is non-automatic, WM collapses under load — producing reduced fluency, limited organization, and minimal elaboration even when ideas are intact.

WM/EF can be primary or secondary. A student with ADHD may show WM/EF collapse as the primary deficit. A student with dysgraphia may show WM/EF collapse as a secondary effect of transcription overload.

Key question: Do WM/EF weaknesses persist across conditions, or do they emerge only when writing demands increase? Secondary collapse resolves when transcription demands are reduced (e.g., typing, scribe).

Framework: Seaberry, ESC Region 11 (2025); Wolf & Berninger (2018)

Three Eligibility Pathways — Differential Considerations
✍️ SLD — Written Expression (Dysgraphia)

Primary breakdown: Graphomotor execution and/or orthographic processing. Writing remains weak across conditions — even with structure, modeling, extended time, and reduced demands.

CHC profile:

  • Primary: Graphomotor deficit, Orthographic processing (Glr subcomponent), Gc (language knowledge), Gf
  • Secondary/Interactive: Gwm, Gs, EF collapse under task demands

Persistent difficulty with: organization, elaboration, language formulation in writing, spelling accuracy

EF supports do not resolve the weakness. Graphic organizers, chunking, and extended time may help marginally but do not remediate the core writing weakness — which distinguishes this profile from OHI-ADHD.

Texas Dyslexia Handbook criteria (Fig. 5.3):

  • Illegible and/or inefficient handwriting with variably shaped or poorly formed letters
  • Difficulty with unedited written spelling
  • Low volume of written output and problems with other aspects of written expression
  • Resulting from deficit in graphomotor function and/or storing/retrieving orthographic codes
  • Unexpected relative to age and other abilities given effective instruction

Source: Seaberry, ESC Region 11 (2025); Texas Dyslexia Handbook p. 61

OHI — ADHD (Writing Impact)

Primary breakdown: Access and regulation, not learning. Writing difficulties stem from attention, executive self-regulation, and working memory — not from a deficit in writing-specific learning.

CHC profile:

  • Primary: Gs, Gwm
  • Secondary: Glr retrieval fluency
  • Core issue: Access and regulation, not learning

Writing improves with:

  • Structure and chunking
  • Time extensions
  • Verbal rehearsal before writing
  • Frequent check-ins
  • High-interest or preferred topics
Key differentiator: Writing output varies meaningfully by condition. Strong performance on preferred topics or with environmental supports distinguishes OHI-ADHD from SLD-Written Expression, where weaknesses persist regardless of support level.
Note: ADHD deficits often do not appear on cognitive testing due to the structured, 1:1 testing environment. Classroom observations and teacher/parent report are essential data sources.

Source: Seaberry, ESC Region 11 (2025)

💬 SLD — Oral Expression (Writing Impact)

Primary breakdown: Language formulation. Writing difficulties trace back to weak oral language — poor vocabulary retrieval, syntax difficulties, limited narrative organization, or reduced expressive clarity.

CHC profile:

  • Primary: Gc (language knowledge), Glr (word retrieval), syntactic formulation
  • Secondary/Interactive: Gwm, Gs, EF collapse under language load
  • Difficulties are evident even without writing demands

Writing remains weak even with: visual supports, extra processing time, reduced task demands, familiar topics

Persistent difficulty with: word retrieval, sentence formulation, narrative organization, expressive clarity

Key differentiator: Oral language difficulties are evident across speaking and writing contexts. This is not just a writing problem — it is a language problem that manifests in writing. Assess oral expression directly (e.g., WIAT-IV Oral Expression, KTEA-3 Oral Language, clinical observation).

Source: Seaberry, ESC Region 11 (2025)

Response to Supports — Differential Table
Dimension | SLD-WE / Dysgraphia | OHI — ADHD | SLD — Oral Expression
Primary CHC deficit | Graphomotor, Orthographic (Glr), Gc, Gf | Gs, Gwm — access & regulation | Gc, Glr (word retrieval), syntactic formulation
Writing with structure/graphic organizers | Marginal improvement — weakness persists | Meaningful improvement | Marginal improvement — language formulation remains limited
Writing with extended time | Marginal improvement | Meaningful improvement | Minimal improvement
Writing on preferred/high-interest topic | Minimal change — weakness consistent across topics | Notable improvement | Minimal change — language formulation difficulty persists
Oral expression quality | Oral output may exceed written — strong oral, weak written | Oral output may exceed written; rapid, disorganized oral production | Both oral and written limited — language-based across modalities
Copy vs. dictation comparison | Copy does not improve handwriting (graphomotor) OR inconsistent letter forms (orthographic) | Copy typically improves handwriting quality | Copy improves handwriting; composition quality remains limited
Spelling error pattern | Orthographic errors (letter sequences, whole-word forms); may also include phonological | Inconsistent — careless errors, variable by effort/attention | May include morphological and syntactic errors; phonological errors if DLD coexists
Classroom observation indicators | Avoids writing, slow output, illegible handwriting, frustration regardless of topic | Variable engagement; stronger output with interest, movement, or adult support | Difficulty answering open-ended questions orally; simplified language in class discussion
Key assessment data | TOC, WIAT-IV AWF/Orthographic Fluency, alphabet fluency, writing samples, copy vs. dictation | Conners-4, writing samples across conditions, CBM across settings | WIAT-IV OE/LC, KTEA-3 Oral Language, clinical oral language observation
Spelling Error Analysis
🔍 Four Error Types
  • Phonological errors — Difficulty applying sound-symbol correspondences during spelling. The written attempt does not preserve the phonological structure of the word. Consistent with dyslexia/phonological processing deficit.
  • Orthographic errors — Correct phonology but wrong letter sequences or whole-word forms. The student knows how the word sounds but cannot retrieve the correct stored spelling. Consistent with orthographic processing deficit/dysgraphia.
  • Morphological errors — Difficulty applying morphological rules (prefixes, suffixes, base words). Consistent with language-based or DLD profile.
  • Letter formation errors — Words are correctly spelled, but letters are formed inconsistently or illegibly. Consistent with graphomotor deficit.
Clinical note: A student can show multiple error types simultaneously. Error analysis informs both eligibility framing and instructional targets.

Framework: Seaberry, ESC Region 11 (2025); Berninger & Wolf (2009)

📋 TEA SLD Guidance — Written Expression Data Sources

Informal: Parent/teacher interview, observation, work samples (handwriting, journals, descriptive, narrative, expository, persuasive)

Curriculum-based: Intervention progress monitoring, CBM (story starter + total words written, words spelled correctly, total correct punctuation; a scoring sketch follows this panel), comparison to enrolled grade-level standards, teacher-made spelling tests

Criterion-referenced: Writing universal screeners, district writing benchmarks, STAAR RLA/English, TELPAS writing assessment

Norm-referenced: Standardized measures of letter formation, handwriting, word and sentence dictation (timed and untimed), copying, spelling, writing fluency, organization, ideas, grammar, punctuation, structure

TEA guidance (January 2025): Writing is one of the most complex tasks students will be asked to engage in. The MDT should also consider Gwm, Gr, Gf, Gs/Executive Functioning, and Gc when evaluating written expression.

Source: TEA Guidance for the Comprehensive Evaluation of SLD (January 2025)
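
As a purely illustrative companion to the CBM metrics listed above (and not a TEA or publisher scoring protocol), the sketch below tallies total words written and words spelled correctly from a hand-scored story-starter sample; the sample text, punctuation handling, and misspelling list are hypothetical.

```python
# Minimal sketch (illustrative, not a TEA or publisher scoring protocol):
# tallying total words written (TWW) and words spelled correctly (WSC) from a
# story-starter sample. The sample and the examiner-marked errors are hypothetical.

def score_writing_probe(sample, misspelled_words):
    words = sample.split()                          # every word group counts toward TWW
    tww = len(words)
    wsc = sum(1 for w in words
              if w.strip(".,!?;:").lower() not in misspelled_words)
    return {"TWW": tww, "WSC": wsc}

sample = "The dog ran doun the street and jumpt over the fence"
marked_misspellings = {"doun", "jumpt"}             # identified by the examiner, not software

print(score_writing_probe(sample, marked_misspellings))   # {'TWW': 11, 'WSC': 9}
```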

🤠

Texas Policy & Written Expression

At the state and federal level, students are reported simply as SLD — not as SLD-Basic Reading, SLD-Written Expression, etc. These are not separate disability categories. The purpose of identifying the specific area is to describe patterns of need, guide instruction and services, and support evaluation decisions — not to assign a label.

The Texas Dyslexia Handbook (2024) explicitly addresses dysgraphia within the SLD-Written Expression category. Dysgraphia may be identified when graphomotor and/or orthographic processing deficits are the primary area of need. The MDT must document that the difficulty is unexpected relative to age and other abilities and is not adequately explained by inadequate instruction.

Dual eligibility is appropriate when multiple disabilities contribute to educational need. A student may appropriately qualify under SLD-Written Expression (dysgraphia) + OHI-ADHD when both conditions independently contribute to their writing difficulties. In Lauren C. v. Lewisville ISD (5th Cir. 2018), the court affirmed that IDEA's concern is with whether a student is receiving FAPE, not with disability labels — reinforcing that eligibility determinations should be driven by educational need, not diagnostic category alone.

Sources: Seaberry, ESC Region 11 (2025); Texas Dyslexia Handbook (2024); TEA SLD Guidance (January 2025); Lauren C. v. Lewisville ISD, No. 17-40796 (5th Cir. 2018)

Related Tools
🔀 SLD Framework Application Reference · 📖 SLD Domain Reference · 🔤 Reading Battery Guide · 📋 Dyslexia Clinical Reference · 📊 Score Interpretation Reference · ✍️ FIE Report Starter
⚠️ Professional Judgment Required — Tools and references on this hub are clinical aids, not substitutes for professional judgment or assessment manuals. Always refer to the administration and technical manuals for each instrument. Eligibility decisions must be made by a qualified multidisciplinary team in accordance with IDEA, Texas TAC §89.1040, and district policy. Barber Sped Hub is an internal diagnostic reference developed for Barber Sped Hub diagnosticians and is not intended as legal, psychological, or medical advice.