IDEA 2004 does not mandate a single method for SLD identification — it prohibits states from requiring a severe discrepancy as the sole criterion and permits RTI and other research-based methods. Texas TAC §89.1040 likewise permits multiple approaches. The frameworks below represent the major current options. PSW alone has three distinct models in use in Texas, each with different criteria, tools, and theoretical grounding.
| Dimension | XBA | PSW | C-SEP | RTI / MTSS |
|---|---|---|---|---|
| Primary basis for SLD | Processing deficits + academic underachievement; CHC construct coverage | Cognitive weakness + academic weakness + cognitive strength pattern | Multiple existing data sources + targeted testing only when data signals a gap; PSW pattern can be documented but existing data, intervention history, and professional judgment are primary — philosophically closer to RTI than PSW | Inadequate response to validated instruction across tiers; academic performance data |
| Theoretical grounding | CHC theory (Cattell-Horn-Carroll); cognitive science | CHC theory + discrepancy logic; multiple PSW models exist (Naglieri, Flanagan-Kaufman, Hale) | RTI-aligned philosophy; CHC theory as organizational structure; multiple data sources as primary evidence; professional judgment; task demands analysis; oral language integration | Behavioral/instructional science; evidence-based practice; data-based decision making |
| Role of IQ/COG battery | Central — must sample all relevant CHC narrow abilities | Central — cognitive pattern is the defining criterion | Central — core battery administered first; selective measures added only when data indicates a gap | Supplementary — not required to establish SLD; supports exclusionary ruling |
| Standardized test burden | High — multiple batteries required to fill construct gaps | High — cognitive + achievement batteries required across domains | Moderate — core battery + selective targeted additions only when warranted; designed to be more efficient than other PSW models | Low to moderate — CBM/progress monitoring + targeted evaluation |
| Required data beyond testing | Observation, history, classroom data (supportive) | Observation, history, classroom data (supportive) | Existing data (Review step) is foundational — screening history, prior records, observations, work samples, parent/teacher input organized before testing begins | Tiered intervention data, fidelity documentation, progress monitoring — all essential |
| Texas TAC alignment | Permitted — no specific mention; aligns with comprehensive evaluation requirements | Permitted — aligns with §89.1040(d) research-based methods | Well-aligned — Texas-developed; aligns with §89.1040 comprehensive evaluation requirements; PSW approach grounded in multiple data sources | Explicitly supported — §89.1040(d) permits RTI; MTSS is state-promoted framework |
| Dyslexia Handbook alignment | Compatible — processing measures required; XBA provides systematic coverage | Compatible — phonological processing deficit fits cognitive weakness criterion | Well-aligned — phonological processing documentation, multiple data sources, and failure-to-respond requirements all addressed within C-SEP framework | Strong alignment — handbook requires documented failure to respond to systematic reading instruction |
| Reliability / validity concerns | Moderate — cross-battery mixing norms is a known limitation; requires careful interpretation | High — PSW pattern criteria vary by model; low inter-rater reliability in research; critiqued for circular reasoning | Moderate — shares some PSW critiques; training required for fidelity; less nationally recognized than RTI but well-established in Texas diagnostician community | Moderate — RTI alone insufficient without comprehensive evaluation; quality depends on fidelity of Tier 2/3 implementation |
| Status in Texas | Active — systematic CHC-based supplement; used when a single battery has construct gaps | Active — LDA-endorsed; trained in Texas through TEDA; the primary diagnostic approach in Texas | Active — Texas-trained network; 2024 handbook released; growing through TEDA training | Active — supported under IDEA 2004; permitted by Texas TAC §89.1040(d); required prior to many evaluations |
| Best suited for | Complex profiles needing construct-level analysis; diagnosticians with CHC training | Cases where cognitive pattern documentation is needed; experienced evaluators | Cases with rich existing data; students with intervention history; diagnosticians trained in C-SEP; districts wanting an RTI-compatible framework that still meets comprehensive evaluation requirements | School-based SLD identification; early intervention contexts; most typical referrals |
IDEA 2004 made three key changes relevant to SLD identification:
- Prohibited states from requiring a severe ability-achievement discrepancy as the sole criterion for SLD identification.
- Permitted (but did not require) the use of a process that examines whether the student responds to scientific, research-based intervention — the RTI/MTSS pathway.
- Permitted the use of "other alternative research-based procedures" — the opening for XBA, PSW, and similar approaches.
Key critiques and practical considerations for these alternative research-based approaches:
- Norm mixing problem: XBA combines subtests from different batteries with different normative samples — a recognized psychometric limitation.
- PSW reliability: Multiple competing PSW models (Naglieri, Flanagan-Kaufman, Hale-Fiorello) produce different classification outcomes on the same student. Inter-rater reliability is low.
- Circular logic critique: PSW's cognitive-achievement discrepancy resembles the discrepancy model IDEA 2004 was designed to move away from.
- Ongoing research debate: Some researchers (Stuebing, Fletcher) have raised concerns about PSW identification rates and consistency across models; proponents (Flanagan, Hale, Dehn) cite the importance of cognitive assessment in identifying the specific processing mechanisms underlying SLD.
- Practical considerations: Both XBA and PSW approaches require specialist training and produce documentation that must be clearly communicated to ARD teams.
Developed by Dawn Flanagan, Samuel Ortiz, and colleagues, XBA applies CHC (Cattell-Horn-Carroll) theory as an organizing framework for selecting and interpreting tests across multiple batteries. The core premise is that any single battery has measurement gaps at the narrow-ability level — XBA fills those gaps by systematically supplementing with subtests from other batteries.
The approach uses "classifying" and "qualifying" criteria: subtests are classified by their CHC construct, and a broad ability is considered adequately measured only when at least two qualitatively different narrow-ability indicators are available from the same normative period.
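The two-indicator adequacy rule lends itself to a simple coverage check. The sketch below is a minimal illustration only: real XBA classification relies on published subtest-to-construct tables and norm-recency rules, and the function name and example data here are hypothetical.

```python
from collections import defaultdict

def adequately_measured(classified_subtests):
    """Apply the two-indicator adequacy rule to a classified battery.

    `classified_subtests` is a list of (broad, narrow) CHC pairs, a
    simplified stand-in for a published XBA classification table.
    A broad ability passes only when at least two qualitatively
    different narrow abilities represent it.
    """
    narrows = defaultdict(set)
    for broad, narrow in classified_subtests:
        narrows[broad].add(narrow)
    return {broad: len(ns) >= 2 for broad, ns in narrows.items()}

# hypothetical battery: Ga is covered twice, Gsm only once
coverage = adequately_measured([
    ("Ga", "Phonetic Coding"),
    ("Ga", "Speech Sound Discrimination"),
    ("Gsm", "Memory Span"),
])
```

In this example Ga would count as adequately measured while Gsm would not, signaling where a supplemental subtest is needed.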
XBA maps directly onto CHC theory. For SLD identification, the key broad abilities typically assessed are:
- Grw — Reading/Writing (decoding, fluency, comprehension, spelling)
- Ga — Auditory Processing (phonetic coding — core for dyslexia)
- Gsm — Short-Term/Working Memory (memory span, WM capacity)
- Gs — Processing Speed (RAN, fluency tasks)
- Gf — Fluid Reasoning (when ruling out ID or reasoning deficits)
- Gc — Crystallized Intelligence (language, vocabulary — for exclusionary purposes)
The XBA process begins by identifying which CHC broad and narrow abilities are most relevant to the referral question, then systematically selecting a primary battery and supplementing it to ensure adequate construct coverage. Once the cross-battery data is organized — typically using the XBA worksheets or XBASS software — the evaluator examines whether a processing deficit is present, whether it is linked to the observed academic weakness, and whether exclusionary factors have been ruled out.
The key principle is that no single battery fully covers the CHC model at the narrow-ability level. XBA addresses this by treating subtests from any co-normed or similarly normed battery as interchangeable within CHC constructs — allowing a more complete assessment picture without redundancy.
- Complex profiles where a single battery clearly doesn't cover the relevant constructs (e.g., WISC-V only for a dyslexia referral with no phonological measures).
- Re-evaluations where prior testing used a different battery and you need to supplement rather than re-administer everything.
- Construct-specific questions — e.g., "Is working memory or processing speed driving the academic deficit more than phonological processing?"
- When you have CHC training and your district is comfortable with the documentation approach.
Core cognitive batteries (choose one as anchor):
Common supplements to fill Ga (phonological) gap:
Achievement batteries (Grw):
Organization tools:
Texas Applicability — XBA
Texas TAC §89.1040 requires a comprehensive individual evaluation that uses a variety of assessment tools and strategies — XBA satisfies this requirement when properly implemented. The state does not prescribe a specific cognitive model, so CHC-based cross-battery analysis is permissible.
The Texas Dyslexia Handbook requires evaluation of phonological processing, which aligns directly with XBA's emphasis on filling Ga construct gaps with CTOPP-2 or equivalent. XBA is most defensible in Texas when the cross-battery additions are clearly documented by CHC construct in the FIE, and cognitive-academic linkages are explicitly stated.
Practical reality: Most Texas diagnosticians are not formally labeling their approach "XBA" — but many are applying XBA principles when they add CTOPP-2, TOC, or TOWRE-2 to supplement a WISC-V or WJ-V COG battery. The XBA framework provides the theoretical justification for why those additions matter.
Processing Strengths and Weaknesses Model (PSWM) and the Psychological Processing Analyzer (PPA) — developed by Milton Dehn.
Operationalizes PSW through CHC-grounded processing constructs with particular emphasis on memory and executive function. The PPA software scores and integrates results across batteries to identify the required discrepancy/consistency pattern.
DD/C Model (Flanagan, Ortiz & Alfonso) — requires dual evidence: a processing weakness that is discrepant from cognitive strengths, AND consistent with (i.e., predicts) the observed academic weakness.
The most formally operationalized PSW model — uses XBA framework and XBASS software for statistical comparison. Closely tied to CHC theory and cross-battery analysis principles.
Core-Selective Evaluation Process — Texas-developed by Dr. Edward Schultz (Midwestern State) and Dr. Tammy Stephens. First published in DiaLog (TEDA's journal) in 2015; updated 2024 handbook.
Although classified as a PSW model, C-SEP's founders align it philosophically with RTI — placing existing data first and using standardized testing selectively. Its four phases: Review → Plan → Assess → Decide.
Despite meaningful differences in operationalization, all three models share a common logical framework:
- Cognitive weakness (W) — at least one processing area that falls significantly below average and below the student's own cognitive strengths
- Academic weakness (Aw) — achievement deficit in the domain linked to the cognitive weakness
- Cognitive strength (S) — at least one cognitive area that remains at or above average, demonstrating the deficit is specific rather than global
- Theoretical link — the cognitive weakness must be theoretically and empirically connected to the academic deficit (e.g., phonological processing → decoding)
- Exclusionary factors — all IDEA/Texas TAC exclusions must be ruled out
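The shared W/Aw/S logic can be sketched in code. This is purely illustrative: every cutpoint, score, and name below is a placeholder, each model operationalizes the comparisons differently, and exclusionary factors still require separate documentation.

```python
def psw_pattern(cognitive, academic, link, weak_cut=85, strong_cut=90):
    """Sketch of the logic shared by PSW models (W + Aw + S + link).

    All cutpoints are arbitrary placeholders -- each model (Dehn PSWM,
    DD/C, C-SEP) defines its own criteria, comparisons, and data
    integration. Exclusionary factors are handled separately.
    """
    weaknesses = {area for area, score in cognitive.items() if score < weak_cut}
    strengths = {area for area, score in cognitive.items() if score >= strong_cut}
    if not weaknesses or not strengths:
        return False  # the deficit must be specific, not global (the S criterion)
    # the cognitive weakness must map onto a linked academic weakness
    return any(
        link.get(w) in academic and academic[link[w]] < weak_cut
        for w in weaknesses
    )

# hypothetical scores: phonological weakness linked to a decoding deficit
meets = psw_pattern(
    cognitive={"phonological processing": 78, "fluid reasoning": 102},
    academic={"basic reading": 80},
    link={"phonological processing": "basic reading"},
)
```

Note that a uniformly low profile returns False: without a cognitive strength, the pattern points toward a global rather than specific deficit.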
PSW has been the subject of significant professional debate since 2010:
- Model inconsistency: Studies show that different PSW models classify different students as SLD even when applied to identical data — undermining claims that they measure the same construct.
- Circular reasoning: Requiring a cognitive-achievement discrepancy pattern resembles the ability-achievement discrepancy model IDEA 2004 was designed to move away from.
- Research base: Proponents (Hale, Naglieri, Flanagan) and critics (Fletcher, Stuebing, Miciak) have reached opposite conclusions — the field has not reached consensus.
- LDA endorses PSW: The Learning Disabilities Association of America specifically endorses PSW models as most aligned with current research on SLD, citing that LD is defined as a disorder in basic psychological processes — making cognitive assessment central to identification.
- Texas-specific critique (2025): A peer-reviewed analysis of TEA guidance documents (Pater-Rov, 2025) found that TEA's own SLD guidance continues to reference PSW as an acceptable method despite the lack of evidence supporting it. The review also noted that TEA's guidance describes only RTI and PSW methods, leaving no room for component-based approaches. Pater-Rov recommends removing PSW model references from TEA guidance in favor of language aligned with current evidence-based practice research.
| Feature | Dehn PSWM / PPA | Flanagan DD/C | C-SEP (Schultz & Stephens) |
|---|---|---|---|
| Theoretical base | CHC theory with emphasis on memory & executive function processing constructs | CHC theory — identical broad ability framework as XBA | CHC theory + RTI philosophy; oral language prominently integrated |
| Core requirement | Processing weakness + academic weakness + relative cognitive strength; consistency of weakness, discrepancy from strength | Dual requirement: processing weakness must be discrepant from strengths AND consistent with (predictive of) academic deficit | Review of existing data → core battery → selective additions; PSW pattern integrated with all data sources via professional judgment |
| Decision process | PPA software scores and integrates cross-battery data algorithmically | XBASS software (or manual XBA worksheets) — statistical discrepancy/consistency analysis | Professional judgment throughout; structured worksheet supports data triangulation; not algorithm-driven |
| Software / tools | PPA (required) | XBASS (strongly recommended) | No proprietary software required — C-SEP worksheets are free |
| Test burden | Moderate-high — multiple batteries often needed to fill construct gaps for PPA | High — XBA fill-in approach requires systematic supplementation | Designed to be efficient — core battery + selective additions only; lowest burden of the three |
| Texas familiarity | Moderate — used by some diagnosticians; PPA less widely known than DD/C | Moderate — well-known but requires XBA training | High — Texas-developed; trained extensively through TEDA; 2024 handbook |
| Philosophical alignment | Traditional PSW — begins with cognitive testing | Traditional PSW — begins with cognitive testing; tightly XBA-integrated | RTI-aligned philosophy — begins with existing data; testing is selective |
Texas Applicability — PSW Generally
Texas TAC §89.1040(d) permits "other research-based procedures" for SLD identification, which encompasses all three PSW models. The state does not mandate a specific model. All three are legally permissible in Texas when properly implemented with comprehensive evaluation data, documented exclusionary factors, and multiple data sources.
The 2025 TEDA conference explicitly identified all three models — Dehn PSWM, Flanagan DD/C, and C-SEP — as the three most commonly used PSW approaches in Texas, reflecting their growing presence in Texas diagnostic practice.
Critical caveat: When using any PSW model, document which model you are applying and state your criteria explicitly. Vague references to "a pattern of strengths and weaknesses" without specifying the model are difficult to defend at due process.
A 2025 peer-reviewed analysis (Pater-Rov) found that TEA's guidance documents have improved over time but still contain inconsistencies — including continued reference to PSW approaches despite the lack of a strong evidence base, and the absence of component-based models as an option. Texas diagnosticians should be aware that the state's own policy documents are a work in progress. Use multiple data sources, document your reasoning, and do not rely on cognitive pattern alone.
Milton Dehn developed the PSWM as a CHC-grounded approach that operationalizes PSW criteria using standardized cognitive and academic test data. Dehn's framework places particular emphasis on cognitive processing constructs — especially working memory (Gsm), processing speed (Gs), and executive function — as the mechanisms underlying most SLD presentations.
The model requires the evaluator to:
- Identify at least one processing weakness (W) below a specified cutpoint
- Identify at least one cognitive strength (S) that is both above the cutpoint and significantly discrepant from the weakness
- Confirm a corresponding academic weakness (Aw) in the same functional domain
- Demonstrate the cognitive weakness is consistent with the academic weakness and discrepant from the cognitive strength
The Psychological Processing Analyzer (PPA) is scoring software developed by Dehn that integrates subtest and composite scores from multiple batteries and applies the PSWM criteria algorithmically. It is the primary implementation tool for this model.
Key features of the PPA:
- Accepts scores from multiple cognitive batteries (WISC-V, WJ-V COG, KABC-II, CAS-2, and others)
- Classifies each subtest by CHC broad and narrow ability
- Identifies which subtests meet "strength" vs. "weakness" criteria based on normative and intra-individual comparison
- Determines whether the required discrepancy/consistency pattern is present
- Generates a report-ready summary of the PSW analysis
- When you have access to the PPA and training in Dehn's processing framework — the software is what makes this model distinctly more systematic than informal PSW application.
- Working memory or processing speed is a central concern — Dehn's model is especially well-suited to cases where Gsm or Gs deficits are driving academic difficulties, such as math disability or written expression.
- When RTI data is insufficient and you want a PSW model with a clearly documented algorithmic decision process.
- Complex cognitive profiles where intra-individual comparison is needed to identify specific weaknesses that aren't below average normatively but are significantly lower than the student's own cognitive ceiling.
Cognitive batteries (enter scores into PPA):
Key processing constructs targeted (Dehn emphasis):
Achievement batteries:
Software:
Texas Applicability — Dehn PSWM
The Dehn PSWM is permissible under Texas TAC §89.1040(d) as an "other research-based procedure" for SLD identification. The PPA's algorithmic pattern analysis provides clear documentation of the discrepancy/consistency criteria — which can strengthen defensibility compared to informal PSW interpretation.
Named explicitly at the 2025 TEDA conference as one of the three PSW models most commonly used in Texas. Diagnosticians using this model should document that the Dehn PSWM was the specific approach applied, specify the PPA version used, and explicitly state the criteria that were met.
The Dual Discrepancy/Consistency (DD/C) model was developed by Dawn Flanagan, Samuel Ortiz, and Vincent Alfonso as a PSW approach tightly integrated with CHC theory and the XBA framework. It is sometimes called the "Flanagan-Kaufman" operational definition in older literature, though the DD/C label is now standard.
The model is defined by two simultaneous requirements:
- Discrepancy: The processing weakness (W) is significantly lower than the cognitive strength (S) — the student has both high and low cognitive abilities, not a uniformly flat profile.
- Consistency: The processing weakness (W) is statistically consistent with (not significantly different from) the academic deficit (Aw) — the academic performance is no better than what the processing profile would predict.
This dual criterion is intended to address the "false positive" problem in earlier PSW models — requiring not just that a weakness and an academic deficit co-exist, but that they are statistically linked in a specific directional pattern.
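The directional nature of the dual criterion can be sketched as follows. This is an illustration only, not the XBASS algorithm: the real analysis uses base rates and regression-based comparisons, and `sig` below is an arbitrary stand-in for a significance threshold.

```python
def ddc_flag(strength, weakness, achievement, sig=12):
    """Illustrative sketch of the DD/C dual criterion.

    Discrepancy: the processing weakness falls well below the
    cognitive strength. Consistency: the weakness is NOT
    significantly different from the academic deficit.
    """
    discrepant = (strength - weakness) >= sig       # W well below S
    consistent = abs(weakness - achievement) < sig  # W tracks with Aw
    return discrepant and consistent

# hypothetical scores: strength 105, processing weakness 80, reading 78
flagged = ddc_flag(strength=105, weakness=80, achievement=78)
# a flat profile (strength 100, weakness 95) fails the discrepancy arm
flat = ddc_flag(strength=100, weakness=95, achievement=78)
```

The flat-profile case shows why the dual criterion screens out students whose cognitive scores are uniform: without a genuine strength, no discrepancy exists to anchor the pattern.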
DD/C and XBA are deeply intertwined — they share the same authors, the same CHC theoretical framework, and the same XBASS software. The key distinction:
- XBA is a framework for systematic assessment — it ensures construct coverage by filling gaps across batteries.
- DD/C is a decision model — it specifies the statistical pattern (discrepancy + consistency) required to identify SLD.
In practice, an evaluator using DD/C will first apply XBA principles to ensure adequate construct coverage, then use XBASS to run the discrepancy/consistency analysis on the assembled data set.
- When you have XBA training and XBASS access — the model requires both to implement with fidelity.
- Complex profiles needing statistical documentation of the cognitive-academic link — DD/C's dual criterion provides the most formal statistical support of any PSW model.
- Cases where RTI data is unavailable and you need a rigorous alternative-research-based pathway with explicit documentation of discrepancy and consistency.
- Private/clinical evaluations where a thorough cognitive-academic analysis is expected and time is not the primary constraint.
Core cognitive batteries (choose one as XBA anchor):
Supplements to fill Ga (phonological) gap:
Achievement batteries (Grw/Gq):
Software & frameworks:
Texas Applicability — Flanagan DD/C
The DD/C model is permissible under Texas TAC §89.1040(d) as a research-based procedure for SLD identification. Its statistical documentation of discrepancy and consistency provides a strong evidentiary foundation when properly implemented with XBASS. Named explicitly at the 2025 TEDA conference as one of the three PSW models most commonly used in Texas.
The Texas Dyslexia Handbook requirement for phonological processing documentation aligns with DD/C's CHC-based approach to identifying Ga weakness as the cognitive deficit. When using DD/C for dyslexia identification, the CTOPP-2 or equivalent provides the Ga cluster data needed for both the discrepancy (vs. other cognitive strengths) and consistency (vs. reading achievement) analyses.
Documentation tip: In the FIE, name the model explicitly ("Dual Discrepancy/Consistency model, Flanagan, Ortiz & Alfonso"), cite the XBASS analysis, and state which discrepancy and consistency criteria were met with the specific scores.
The Core-Selective Evaluation Process (C-SEP) was developed by Dr. Edward Schultz (Midwestern State University) and Dr. Tammy Stephens, both Texas-based educators with deep roots in educational diagnostics. C-SEP was first introduced in the DiaLog (the journal of the Texas Educational Diagnosticians' Association) in 2015 and has been refined through ongoing field feedback, training, and a 2024 updated handbook.
Although C-SEP can be used to document a pattern of strengths and weaknesses, its founders — particularly Dr. Schultz — have stated in training that C-SEP's philosophy aligns much more closely with the RTI model than with traditional PSW. The framework places existing data and multiple sources of information at the center of the evaluation process, with standardized testing added selectively only when that data demands it. This stands in contrast to PSW models that begin with cognitive testing and look for a predetermined pattern.
Its defining characteristic is the core-then-selective structure: evaluators begin with a thorough Review of all existing data, then — if needed — administer core tests from an established battery (WJ, Wechsler, or Kaufman), adding targeted selective assessments only when the data signals a gap that must be filled to answer the referral question.
C-SEP is grounded in CHC (Cattell-Horn-Carroll) theory and uses norm-referenced cognitive, achievement, and oral language measures. It integrates multiple sources of data with individualized assessment to identify a pattern of strengths and weaknesses.
Key theoretical elements:
- CHC-organized constructs — cognitive and academic abilities are understood through CHC broad and narrow ability framework
- Multiple data sources — standardized testing is integrated with existing data, observations, work samples, history, and input forms
- Oral language integration — C-SEP explicitly includes oral language measures alongside cognitive and achievement batteries, recognizing oral language as foundational to academic skill development
- Professional judgment — C-SEP emphasizes evaluator professional judgment throughout the process rather than algorithmic score comparison
- Task demands analysis — a structured examination of what skills a task actually requires, used during the decision phase to interpret score patterns meaningfully
C-SEP moves through four sequential phases — Review, Plan, Assess, and Decide. The framework is designed so that each phase informs the next: existing data shapes the assessment plan, assessment results inform the decision, and professional judgment operates throughout.
- When RTI data is limited or unavailable — student transferred, was homeschooled, or the district's MTSS system doesn't have adequate documentation. C-SEP provides a comprehensive, legally defensible alternative pathway.
- When existing data is rich — C-SEP's Review step is particularly powerful when there is a lot of existing data (DIBELS history, prior evals, grades, teacher input). It leverages that data rather than defaulting to a full battery immediately.
- When test burden is a concern — C-SEP's selective approach is well-suited for students who have already been tested extensively, for re-evaluations, or for younger students where long testing sessions are impractical.
- For diagnosticians trained in C-SEP — the structured 4-step process and accompanying worksheets make it a practical everyday framework, not just a theoretical model.
- Complex referrals with multiple data sources — C-SEP's data triangulation and task demands analysis steps are well-suited for cases where scores alone don't tell the full story.
Core cognitive batteries (start here — choose one):
Core achievement batteries:
Oral language measures (key C-SEP component):
Selective additions — added only when core data indicates need:
Existing data sources — central to the Review step:
C-SEP resources:
Texas Applicability — C-SEP
C-SEP is a Texas-developed framework — created by Dr. Edward Schultz (Midwestern State University, Wichita Falls) and Dr. Tammy Stephens, both with deep ties to Texas special education. It was first published in the DiaLog, the journal of the Texas Educational Diagnosticians' Association (TEDA), and has been widely trained in Texas school districts.
C-SEP aligns with Texas TAC §89.1040 as a comprehensive, research-based evaluation approach that uses multiple data sources and addresses all required SLD criteria. Its emphasis on existing data review, targeted assessment planning, and professional judgment is well-suited to Texas diagnosticians' daily practice.
The Texas Dyslexia Handbook requirements — phonological processing documentation, multiple data sources, documentation of failure to respond — are all addressed within the C-SEP framework. C-SEP is one of the most practical and contextually appropriate PSW-based approaches for Texas educational diagnosticians, particularly those who have completed C-SEP training.
Training available: Dr. Schultz and Dr. Stephens offer face-to-face, distance, and on-demand training for both individual evaluators and district-wide implementation. Free "Beyond the Score" webinars are offered periodically. Visit csep.online for current training offerings.
RTI (Response to Intervention) / MTSS (Multi-Tiered System of Supports) uses a student's response — or failure to respond — to evidence-based instruction as a central component of SLD identification. Rather than looking for a cognitive-academic discrepancy, RTI asks: did the student fail to make adequate progress despite high-quality instruction, implemented with fidelity, at increasingly intensive levels?
Three-tier structure:
- Tier 1: High-quality core instruction for all students. Universal screening (DIBELS, AIMSweb+) identifies at-risk students. ~80% of students should be successful at Tier 1.
- Tier 2: Supplemental, targeted small-group intervention (3–4x/week, ~30 min). Progress monitoring every 1–2 weeks. ~15% of students may need Tier 2.
- Tier 3: Intensive, individualized intervention. Frequent progress monitoring. Students who fail to respond here are primary candidates for special education evaluation. ~5% of students.
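The tiered triage above can be sketched as a simple decision rule. The cutpoints here are illustrative placeholders: districts set their own local norms, and real tier decisions also weigh progress-monitoring trend data, never a single screening score.

```python
def assign_tier(percentile, tier2_cut=25, tier3_cut=10):
    """Simplified triage from a universal-screening percentile.

    Cutpoints are arbitrary examples, not recommended values --
    local norms and trend data drive actual tier placement.
    """
    if percentile <= tier3_cut:
        return 3  # intensive, individualized intervention
    if percentile <= tier2_cut:
        return 2  # supplemental small-group intervention
    return 1      # core instruction with universal screening

tiers = [assign_tier(p) for p in (4, 18, 60)]  # hypothetical screening results
```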
IDEA 2004, 34 CFR §300.307–300.311:
- States may NOT require a severe discrepancy between IQ and achievement as the sole criterion for SLD.
- States must permit the use of a process based on the student's response to scientific, research-based intervention.
- States may permit the use of other alternative research-based procedures.
NASP Position (2011, reaffirmed 2020):
- Endorses RTI/MTSS as the preferred SLD identification model.
- Does not endorse PSW as a validated primary identification method.
- Emphasizes that RTI must be coupled with a comprehensive evaluation — not used as a standalone SLD determination.
As a diagnostician, you are not running the RTI process — you are auditing whether it was run well. A useful organizing lens for this audit is the RIOT/ICEL framework (Wright, 2010), which describes the breadth of data that should be present in a strong RTI file before a referral reaches your desk.
RIOT — Four data source types the campus should have drawn from:
ICEL — Four domains the data should address:
Framework: Wright, J. (2010). The RIOT/ICEL matrix: Organizing data to answer questions about student academic performance & behavior. How RTI Works Series. interventioncentral.org
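As a rough sketch of how this audit might be organized, the two frameworks cross into a 16-cell grid. The Python below (function name and example evidence are hypothetical) flags which cells a referral file has not yet addressed.

```python
# RIOT sources crossed with ICEL domains form a 16-cell audit grid.
RIOT = ("Review", "Interview", "Observe", "Test")
ICEL = ("Instruction", "Curriculum", "Environment", "Learner")

def coverage_gaps(evidence):
    """Return the (source, domain) cells the RTI file has not addressed.

    `evidence` is a set of (source, domain) pairs found in the file;
    the pairs below are illustrative examples, not a required format.
    """
    return [(s, d) for s in RIOT for d in ICEL if (s, d) not in evidence]

file_evidence = {
    ("Review", "Instruction"),   # e.g., lesson plans in the cumulative folder
    ("Interview", "Learner"),    # e.g., teacher input form
    ("Observe", "Environment"),  # e.g., classroom observation notes
    ("Test", "Learner"),         # e.g., CBM probes
}
gaps = coverage_gaps(file_evidence)  # the cells still needing data
```

Not every cell must be filled for every referral, but a file that leaves whole rows or columns empty signals a thin RTI record.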
- All school-based SLD referrals — RTI/MTSS data should be part of every evaluation, documenting response to instruction before or alongside comprehensive PSW-based assessment.
- Dyslexia identification — the Texas Dyslexia Handbook explicitly requires documented failure to respond to systematic, explicit reading instruction. RTI data is the most direct evidence of this.
- Early elementary referrals (K–2) — RTI/MTSS data is especially powerful here because intervention is recent, progress monitoring is frequent, and the "failure to respond" picture is clear.
- Re-evaluations — ongoing progress monitoring data from special education services provides the same kind of RTI documentation for continued eligibility.
Universal screening & progress monitoring (RTI data):
Comprehensive evaluation — cognitive:
Comprehensive evaluation — processing (complements RTI):
Comprehensive evaluation — achievement:
Intervention programs (Tier 2/3 fidelity documentation):
Texas Applicability — RTI / MTSS
Texas TAC §89.1040(d) explicitly permits the use of RTI as part of SLD identification: "A group may determine that a child has a specific learning disability if the child does not achieve adequately for the child's age or to meet state-approved grade-level standards… when provided with learning experiences and instruction appropriate for the child's age or state-approved grade-level standards."
The Texas Dyslexia Handbook requires that students be provided systematic, explicit reading instruction before a dyslexia determination — and that failure to respond to such instruction be documented. This is RTI in practice. The Handbook aligns directly with an RTI-based identification approach for dyslexia.
Texas also participates in MTSS initiatives through TEA's Texas MTSS framework, which provides guidance on Tier 1–3 implementation. Districts that implement MTSS with fidelity produce the strongest RTI documentation to complement PSW-based evaluation.
Critical caveat: Texas TAC §89.1040 still requires a full individual evaluation including cognitive assessment, observations, and comprehensive data review — RTI data alone is never sufficient for SLD eligibility in Texas. The RTI documentation strengthens the evaluation; it does not replace it.
Handwriting / Graphomotor Execution — Motor planning, pencil control, and letter formation. A graphomotor deficit means the motor program itself is impaired — not just letter memory.
Orthographic Processing — Storing and retrieving letter forms and sequences from long-term memory. Inconsistent letter production across a sample (same letter formed differently) signals an orthographic retrieval problem, distinct from graphomotor execution.
Framework: Seaberry, ESC Region 11 (2025); Berninger & Wolf (2009)
Oral language is the foundation of written composition. Vocabulary knowledge, syntax awareness, narrative organization, and sentence formulation all drive written output quality.
Key clinical move: Compare oral vs. written output. If the student's oral narrative is rich and detailed but written output is limited, the bottleneck is transcription and WM load — not language knowledge. If both oral and written output are limited, the primary weakness is likely language-based.
Framework: Seaberry, ESC Region 11 (2025); Texas Dyslexia Handbook p. 30
Automaticity of transcription is required before EF can direct energy to composition. When transcription is non-automatic, WM collapses under load — producing reduced fluency, limited organization, and minimal elaboration even when ideas are intact.
WM/EF can be primary or secondary. A student with ADHD may show WM/EF collapse as the primary deficit. A student with dysgraphia may show WM/EF collapse as a secondary effect of transcription overload.
Framework: Seaberry, ESC Region 11 (2025); Wolf & Berninger (2018)
Primary breakdown: Graphomotor execution and/or orthographic processing. Writing remains weak across conditions — even with structure, modeling, extended time, and reduced demands.
CHC profile:
- Primary: Graphomotor deficit, Orthographic processing (Glr subcomponent), Gc (language knowledge), Gf
- Secondary/Interactive: Gwm, Gs, EF collapse under task demands
Persistent difficulty with: organization, elaboration, language formulation in writing, spelling accuracy
Texas Dyslexia Handbook criteria (Fig. 5.3):
- Illegible and/or inefficient handwriting with variably shaped or poorly formed letters
- Difficulty with unedited written spelling
- Low volume of written output and problems with other aspects of written expression
- Resulting from deficit in graphomotor function and/or storing/retrieving orthographic codes
- Unexpected relative to age and other abilities given effective instruction
Source: Seaberry, ESC Region 11 (2025); Texas Dyslexia Handbook p. 61
Primary breakdown: Access and regulation, not learning. Writing difficulties stem from attention, executive self-regulation, and working memory — not from a deficit in writing-specific learning.
CHC profile:
- Primary: Gs, Gwm
- Secondary: Glr retrieval fluency
- Core issue: Access and regulation, not learning
Writing improves with:
- Structure and chunking
- Time extensions
- Verbal rehearsal before writing
- Frequent check-ins
- High-interest or preferred topics
Source: Seaberry, ESC Region 11 (2025)
Primary breakdown: Language formulation. Writing difficulties trace back to weak oral language — poor vocabulary retrieval, syntax difficulties, limited narrative organization, or reduced expressive clarity.
CHC profile:
- Primary: Gc (language knowledge), Glr (word retrieval), syntactic formulation
- Secondary/Interactive: Gwm, Gs, EF collapse under language load
- Difficulties are evident even without writing demands
Writing remains weak even with: visual supports, extra processing time, reduced task demands, familiar topics
Persistent difficulty with: word retrieval, sentence formulation, narrative organization, expressive clarity
Source: Seaberry, ESC Region 11 (2025)
| Dimension | SLD-WE / Dysgraphia | OHI — ADHD | SLD — Oral Expression |
|---|---|---|---|
| Primary CHC deficit | Graphomotor, Orthographic (Glr), Gc, Gf | Gs, Gwm — access & regulation | Gc, Glr (word retrieval), syntactic formulation |
| Writing with structure/graphic organizers | Marginal improvement — weakness persists | Meaningful improvement | Marginal improvement — language formulation remains limited |
| Writing with extended time | Marginal improvement | Meaningful improvement | Minimal improvement |
| Writing on preferred/high-interest topic | Minimal change — weakness consistent across topics | Notable improvement | Minimal change — language formulation difficulty persists |
| Oral expression quality | Oral output may exceed written — strong oral, weak written | Oral output may exceed written; rapid, disorganized oral production | Both oral and written limited — language-based across modalities |
| Copy vs. dictation comparison | Copying does not improve handwriting quality (graphomotor) or letter-form consistency (orthographic) | Copying typically improves handwriting quality | Copying improves handwriting; composition quality remains limited |
| Spelling error pattern | Orthographic errors (letter sequences, whole-word forms); may also include phonological | Inconsistent — careless errors, variable by effort/attention | May include morphological and syntactic errors; phonological errors if DLD coexists |
| Classroom observation indicators | Avoids writing, slow output, illegible handwriting, frustration regardless of topic | Variable engagement; stronger output with interest, movement, or adult support | Difficulty answering open-ended questions orally; simplified language in class discussion |
| Key assessment data | TOC, WIAT-IV AWF/Orthographic Fluency, alphabet fluency, writing samples, copy vs. dictation | Conners-4, writing samples across conditions, CBM across settings | WIAT-IV OE/LC, KTEA-3 Oral Language, clinical oral language observation |
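The condition-response pattern in the table above can be sketched as a rough decision aid. This is a hypothetical encoding, not a validated diagnostic rule: the function name, the boolean inputs, and the thresholds for "improves" versus "persists" are all assumptions, and any real determination rests on the full evaluation data described in this section.

```python
# Rough sketch of the condition-response logic in the comparison table.
# Inputs are clinician judgments ("improves" vs. "persists") reduced to
# booleans -- a hypothetical simplification, not a diagnostic instrument.

def hypothesized_profile(improves_with_structure: bool,
                         improves_with_extended_time: bool,
                         improves_on_preferred_topic: bool,
                         oral_exceeds_written: bool) -> str:
    """Map a condition-response pattern to the profile the table associates with it."""
    # ADHD pattern: writing improves meaningfully under supportive conditions.
    if (improves_with_structure and improves_with_extended_time
            and improves_on_preferred_topic):
        return "Consider OHI-ADHD pattern (access/regulation)"
    # Oral Expression pattern: both modalities limited, so oral does not exceed written.
    if not oral_exceeds_written:
        return "Consider SLD-Oral Expression pattern (language-based across modalities)"
    # Dysgraphia pattern: strong oral output, but writing stays weak across conditions.
    return "Consider SLD-WE/dysgraphia pattern (weakness persists across conditions)"

print(hypothesized_profile(True, True, True, True))
```

In practice each cell of the table is a matter of degree ("marginal" vs. "meaningful" improvement), so any such sketch only organizes hypotheses for the MDT to test against observation, work samples, and standardized data.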
- Phonological errors — Difficulty applying sound-symbol correspondences during spelling. The written attempt does not preserve the phonological structure of the word. Consistent with dyslexia/phonological processing deficit.
- Orthographic errors — Correct phonology but wrong letter sequences or whole-word forms. The student knows how the word sounds but cannot retrieve the correct stored spelling. Consistent with orthographic processing deficit/dysgraphia.
- Morphological errors — Difficulty applying morphological rules (prefixes, suffixes, base words). Consistent with language-based or DLD profile.
- Letter formation errors — Words are correctly spelled, but letters are formed inconsistently or illegibly. Consistent with graphomotor deficit.
Framework: Seaberry, ESC Region 11 (2025); Berninger & Wolf (2009)
Informal: Parent/teacher interview, observation, work samples (handwriting, journals, descriptive, narrative, expository, persuasive)
Curriculum-based: Intervention progress monitoring, CBM (story starter + total words written, words spelled correctly, total correct punctuation), comparison to enrolled grade-level standards, teacher-made spelling tests
Criterion-referenced: Writing universal screeners, district writing benchmarks, STAAR RLA/English, TELPAS writing assessment
Norm-referenced: Standardized measures of letter formation, handwriting, word and sentence dictation (timed and untimed), copying, spelling, writing fluency, organization, ideas, grammar, punctuation, structure
Source: TEA Guidance for the Comprehensive Evaluation of SLD (January 2025)
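Two of the CBM metrics named above, total words written (TWW) and words spelled correctly (WSC), are simple counts over a timed probe and can be sketched as follows. The helper name and the toy word list are assumptions for illustration; a real scoring workflow would use the district's CBM scoring conventions and a full spell-check resource.

```python
# Minimal sketch of CBM writing-probe scoring (hypothetical helper, toy lexicon).
# Scores a timed story-starter sample on two metrics from the text:
# total words written (TWW) and words spelled correctly (WSC).

# Assumption: a toy word list stands in for a real spelling reference.
KNOWN_WORDS = {"the", "dog", "ran", "fast", "to", "a", "big", "house"}

def score_probe(sample: str) -> dict:
    """Return TWW and WSC counts for one writing sample."""
    # Strip trailing punctuation and normalize case before counting.
    words = [w.strip(".,!?;:").lower() for w in sample.split()]
    tww = len(words)                                   # total words written
    wsc = sum(1 for w in words if w in KNOWN_WORDS)    # words spelled correctly
    return {"TWW": tww, "WSC": wsc}

print(score_probe("The dog ran fst to a big huose."))  # -> {'TWW': 8, 'WSC': 6}
```

Tracked across repeated probes, these counts give the slope data that progress monitoring compares against grade-level benchmarks; total correct punctuation would be scored analogously against the punctuation the sample requires.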
Texas Policy & Written Expression
At the state and federal level, students are reported simply as SLD — not as SLD-Basic Reading, SLD-Written Expression, etc. These are not separate disability categories. The purpose of identifying the specific area is to describe patterns of need, guide instruction and services, and support evaluation decisions — not to assign a label.
The Texas Dyslexia Handbook (2024) explicitly addresses dysgraphia within the SLD-Written Expression category. Dysgraphia may be identified when graphomotor and/or orthographic processing deficits are the primary area of need. The MDT must document that the difficulty is unexpected relative to age and other abilities and is not adequately explained by inadequate instruction.
Dual eligibility is appropriate when multiple disabilities contribute to educational need. A student may appropriately qualify under SLD-Written Expression (dysgraphia) + OHI-ADHD when both conditions independently contribute to their writing difficulties. In Lauren C. v. Lewisville ISD (5th Cir. 2018), the court affirmed that IDEA's concern is with whether a student is receiving FAPE, not with disability labels — reinforcing that eligibility determinations should be driven by educational need, not diagnostic category alone.
Sources: Seaberry, ESC Region 11 (2025); Texas Dyslexia Handbook (2024); TEA SLD Guidance (January 2025); Lauren C. v. Lewisville ISD, No. 17-40796 (5th Cir. 2018)