Barber Sped Hub
Data Integration Reference
C-SEP Workflow · Multiple Sources · Pattern vs. Profile · Preponderance of Evidence
Comprehensive Evaluation Practice

Data Integration & Pattern Seeking

Data integration is the process of organizing, converging, and synthesizing information from multiple data sources to create a comprehensive, defensible understanding of a student's strengths and weaknesses. It is legally required — and it begins before formal testing, not after.

"It's a learning disability, not a testing disability."

— Tammy L. Stephens, Ph.D., C-SEP & Bosco K12 (TEDA 2026)  ·  Learning data is required; norm-referenced testing is supplementary and used on a case-by-case basis.
C-SEP Critical Steps — The Workflow Framework
🔄 Review → Plan → Assess → Decide
1. REVIEW — Don't skip this step. Collect, organize, and review existing educational information. Multiple sources of data are used to: clarify the referral question, establish underachievement, assess exclusionary factors, review instructional history, create a testing hypothesis, and identify initial patterns of academic strengths and weaknesses. This is where data integration and pattern-seeking begin. The Review step is not optional — it is legally required and determines whether formal testing is even necessary.
2. PLAN — Build a targeted, legally defensible assessment plan. Using data gathered in the Review step, develop a targeted testing plan based on the hypothesis generated from existing data, the referral question, what is known about the construct area, and the individual student. What data is missing? What follow-up is needed? Targeted testing only — not a fixed battery.
3. ASSESS — Targeted and purposeful. Administer core and selective tests based on the Plan. Conduct classroom observation in the area of concern AND in an area of relative strength. Document testing observations — approach, effort, stamina, strategy use, fatigue, response to redirection. Merge NRT data with multiple sources of data (MSD) to determine whether the referral question can be answered.
4. DECIDE — Triangulation and professional judgment. Data are merged and analyzed to determine whether a PSW exists. Professional judgment plays a key role. Establish whether a PSW is evident and consistent with policy. Conduct task demand analysis. Cross-validate findings across sources. A score report alone cannot establish a pattern.

Source: Stephens & Schultz, C-SEP (2015/2024); TEA SLD Guidance (2025)
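
Because the workflow is sequential and Review gates everything downstream, some teams track the Review step's required outputs explicitly before moving to Plan. Below is a minimal organizational sketch in Python; it is an illustrative aid, not part of C-SEP itself, and every field name is hypothetical.

```python
# Illustrative organizational aid only; not part of the C-SEP model itself.
# All field names are hypothetical. Adapt them to district forms and policy.
from dataclasses import dataclass

@dataclass
class ReviewStep:
    referral_question: str = ""            # clarified from existing data
    underachievement_established: bool = False
    exclusionary_factors_reviewed: bool = False
    instructional_history_reviewed: bool = False
    testing_hypothesis: str = ""           # drives the Plan step

    def ready_to_plan(self) -> bool:
        """Review is legally required; planning waits until every output exists."""
        return all([
            self.referral_question,
            self.underachievement_established,
            self.exclusionary_factors_reviewed,
            self.instructional_history_reviewed,
            self.testing_hypothesis,
        ])

review = ReviewStep(referral_question="Why is decoding below grade level despite Tier II support?")
assert not review.ready_to_plan()   # outputs missing: not ready to Plan yet
```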

Multiple Sources of Data — Balanced Review
📂 Informal Data

Fills the gaps that standardized tests miss. Required by policy.

  • Record review — prior evaluations, 504 plans, RTI/MTSS data, attendance, discipline, medical records, language proficiency
  • Parent interview — developmental history, family history, medical history, academic history, behavior/SE, home environment, parent concerns and strengths
  • Teacher interview — academic performance, behavioral observations, functional/EF, communication, interventions tried, strengths
  • Student interview — what is hard, what helps, self-perception
  • Classroom observation(s) — in area of concern AND in area of relative strength; peer comparison; classroom language demands
  • Work samples — handwriting, journals, essays, narrative, expository; error analysis
  • Testing observations — approach, effort, stamina, strategy use, fatigue effects, response to support

Source: Stephens & Schultz, C-SEP (2015/2024)

📈 Curriculum-Based & Criterion-Referenced

CBM (Curriculum-Based Measurement) — standardized, short-duration fluency measures. Academic "thermometers" designed to monitor student growth in basic skills. Formative. Growth over time is summarized as a slope; see the sketch after this list.

  • DIBELS, AIMSweb, EasyCBM — oral reading fluency, phonemic awareness
  • Math computation probes — digits correct per minute
  • Writing CBM — total words written, words spelled correctly, total correct punctuation
  • Universal screeners, benchmark assessments (fall/winter/spring)
  • Progress monitoring data during intervention
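
Because CBM yields repeated, brief, standardized probes, growth is commonly summarized as a slope (score units gained per week) and compared to an aimline or expected growth rate. Below is a minimal sketch assuming weekly ORF probes; the scores are invented for illustration.

```python
# Minimal sketch: least-squares slope for weekly CBM progress-monitoring data.
# Scores are invented for illustration (oral reading fluency, in WCPM).
def cbm_slope(scores: list[float]) -> float:
    """Ordinary least-squares slope: score units gained per probe interval."""
    n = len(scores)
    xs = range(n)                          # probe number (week 0, 1, 2, ...)
    mean_x = sum(xs) / n
    mean_y = sum(scores) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, scores))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

wcpm = [42, 44, 43, 47, 48, 51, 52, 55]    # eight weekly ORF probes
print(f"Growth: {cbm_slope(wcpm):.1f} WCPM per week")   # ≈ 1.9
```

A flat or shallow slope during well-implemented intervention is one data point toward a weakness pattern; as the note below stresses, it is integrated with other sources, never used alone.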

Criterion-Referenced / Criterion-Based Assessment (CBA) — mastery measurement using curriculum content. Summative. Includes:

  • STAAR (with raw score conversion for performance level context)
  • TELPAS
  • District benchmarks using released STAAR
  • iReady diagnostic reports
  • Chapter tests, portfolio assessments
CBM provides data points for establishing strength/weakness patterns, but more formal assessment may be needed for diagnostic information. CBM is a different type of data — integrate it with, not instead of, other sources.

Source: Stephens & Schultz, C-SEP (2015/2024); Shinn (1998)

📊 Norm-Referenced Data

With C-SEP, evaluators fully exploit the information obtained from NRTs — not just composite scores. Four levels of information:

  • Level 1 — Qualitative: Test session observations, error analysis, informal behavioral notes. Most useful for instructional planning.
  • Level 2 — Developmental level: Age equivalents, grade equivalents, level of instruction. Shows where student is functioning developmentally.
  • Level 3 — Proficiency level: RPI (WJ-V), CALP, easy-to-difficult range, developmental/instructional zone.
  • Level 4 — Relative standing: Standard scores, percentile ranks, confidence intervals, discrepancy PR/SD. Shows position relative to peers (worked sketch below).
Cognitive testing is optional, not required. TEA SLD guidance (2025): norm-referenced cognitive testing is used on a case-by-case basis when it adds value beyond existing data. Learning data is required; NRT is supplementary.
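
For Level 4, the familiar metrics are transformations on the same normal scale (mean 100, SD 15 for most composites): the percentile rank follows from the normal CDF, and the confidence interval follows from the standard error of measurement. A minimal worked sketch follows; the reliability coefficient is invented, so substitute the value from the test manual.

```python
# Minimal sketch of Level 4 metrics (relative standing).
# Assumes a standard-score scale with mean 100, SD 15. The reliability
# coefficient is invented for illustration; use the test manual's value.
from statistics import NormalDist

MEAN, SD = 100.0, 15.0

ss = 74                                     # observed standard score
pr = NormalDist(MEAN, SD).cdf(ss) * 100     # percentile rank ≈ 4

rxx = 0.92                                  # reliability (from the manual)
sem = SD * (1 - rxx) ** 0.5                 # standard error of measurement ≈ 4.2
lo, hi = ss - 1.96 * sem, ss + 1.96 * sem   # 95% confidence interval ≈ 66–82

print(f"SS {ss} → PR {pr:.0f}, 95% CI [{lo:.0f}, {hi:.0f}]")
```

Note that some publishers build the interval around the estimated true score rather than the observed score; follow the manual's procedure when reporting.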

Source: Stephens & Schultz, C-SEP (2015/2024); TEA SLD Guidance (2025)

Pattern vs. Profile — A Critical Distinction
🔗 Pattern — What We're Building Toward

A pattern refers to consistent, meaningful links across cognitive, academic, functional, and observational data that explain a student's learning strengths and weaknesses.

  • A consistent relationship observed between cognitive processing weaknesses (or strengths) and academic performance across multiple data sources
  • Shows cause-and-effect connections
  • Focuses on relationships, not isolated scores
  • Explains why the student struggles
Example of a pattern: Phonological awareness (CTOPP-2 PA: 72) → decoding (WIAT-IV PD: 74) → oral reading fluency (DIBELS ORF: 42 WCPM, well below benchmark) → reading comprehension (WIAT-IV RC: 79) → teacher reports reading avoidance and frustration. Parent reports family history of reading difficulty. This convergence of data across sources = a pattern.
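
One way to make that convergence explicit, and to answer the cross-validation question below about whether more than one source indicates the same weakness, is to organize findings by skill domain and source type. A minimal sketch follows; the structure and labels are illustrative, and the entries echo the example above.

```python
# Minimal sketch: organize findings by domain and source type, then check
# how many independent source types corroborate a suspected weakness.
# Entries echo the pattern example above; the structure is illustrative.
findings = [
    ("phonological awareness", "NRT",     "CTOPP-2 PA composite 72"),
    ("decoding",               "NRT",     "WIAT-IV Pseudoword Decoding 74"),
    ("reading fluency",        "CBM",     "DIBELS ORF 42 WCPM, below benchmark"),
    ("reading comprehension",  "NRT",     "WIAT-IV Reading Comprehension 79"),
    ("reading",                "teacher", "avoidance and frustration reported"),
    ("reading",                "parent",  "family history of reading difficulty"),
]

def source_types(domain_prefix: str) -> set[str]:
    return {src for dom, src, _ in findings if dom.startswith(domain_prefix)}

# Two or more independent source types is convergence; a single-source
# finding is a data point, not a pattern.
print(source_types("reading"))   # {'NRT', 'CBM', 'teacher', 'parent'}
```

Counting source types is only a screen: the causal chain from phonological awareness through decoding and fluency to comprehension still rests on professional judgment and the research base.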

Source: Stephens & Schultz, C-SEP (2015/2024)

📸 Profile — What We're Avoiding

A profile refers to a student's collection of scores, behaviors, or skill levels without necessarily identifying relationships among the skills.

  • A description or snapshot of abilities at a single point in time
  • Shows what scores look like but does not explain how or why
  • Descriptive, not interpretive
  • Does not explain why the student struggles
Common mistake: Listing scores in a report without connecting them to each other or to real-world performance. A score report alone cannot establish a pattern. Score report data must be merged with other data sources before a pattern can be documented.
C-SEP does not establish patterns based solely on discrepancy scores. Discrepancy between composite scores is one data point — not a pattern.

Source: Stephens & Schultz, C-SEP (2015/2024)

Chain of Evidence Technique
🔗 Three Links

(As applied in the well-compensated discussion later on this page: Links 1 and 2 are the historical and informal data sources; Link 3 is the norm-referenced test data, interpreted in context rather than in isolation.)

Source: Stephens & Schultz, C-SEP (2015/2024)

Cross-Validation Checklist

Before finalizing eligibility, ask:

  • What patterns emerged from the historical formal and informal data?
  • How do the results on formal measures fit into the larger performance profile of the student?
  • Is there more than one data source indicating the same strength and/or weakness?
  • How do data findings align to the referral question?
  • Does the student's cognitive and/or language PSW align to the achievement PSW in a logical, research-based manner?
  • Were exclusionary factors considered and ruled out as the primary cause of the academic deficit?
  • Did other evaluators come to the same conclusions? (Best practice: use a problem-solving team approach to validate findings with other evaluators.)
Explain inconsistencies. When data sources conflict, do not ignore the outlier — document it, explore it, and explain why the preponderance of data still supports (or does not support) the eligibility determination.

Source: Stephens & Schultz, C-SEP (2015/2024)

Preponderance of Data
⚖️ What It Means

"Preponderance of data" refers to the greater weight of credible evidence gathered across multiple sources that points consistently toward a particular conclusion about a student's strengths and weaknesses.

  • It is not about "proving beyond a shadow of a doubt"
  • It is about having more quality evidence supporting a conclusion than opposing it
  • It is a qualitative judgment, not just a number tally
  • It requires multiple types of data — testing, observations, interviews, progress monitoring, records review
  • Data should converge across sources — e.g., a reading weakness shown in testing, CBMs, classroom observations, and teacher and parent report
If the preponderance of evidence shows a consistent weakness impacting learning, an eligibility decision can be supported — even if one or two data points are inconsistent. Document the inconsistency and explain it.

Source: Stephens & Schultz, C-SEP (2015/2024); TEA SLD Guidance (2025)

📝 FIE Language — Area Intact Example

"Through the collection and analysis of multiple sources of data gathered as part of the assessment process, results indicate that [Student]'s reading skills are intact, and Reading is not suspected as an area of disability. Data collected over time indicates a clear pattern of intact abilities for [Student] in the area of reading. [Student] has a history of passing grades in reading, has met standards on the reading STAAR test for consecutive years, and has passed all dyslexia screeners since Kindergarten. [Student] has also passed all benchmarks in the area of reading. [Student]'s teacher rates overall Basic Reading/Decoding, Oral Reading/Fluency, and Reading Comprehension skills in the above average range. Consequently, there is no need to conduct formal norm-referenced testing in the area of reading."

Model adapted from Stephens & Schultz, C-SEP (2015/2024)

Observation Requirements & Best Practices
👁️ TEA Observation Requirements
  • Observe in the learning environment in areas of concern
  • Use pre-referral observation data OR conduct observation post-consent
  • Consider observing during intervention to document response and strategy use
  • Describe tasks, behaviors, and peer comparison — not just student behavior in isolation
  • Observe in an area of relative strength as well — discordance is part of the PSW pattern
  • Document classroom language demands — what is the linguistic complexity of instruction?
"Assessment without observation is interpretation without context." (Paraphrase of C-SEP principles; Stephens & Schultz, C-SEP, 2015/2024)

Source: TEA SLD Guidance (2025); Stephens & Schultz, C-SEP (2015/2024)

🔬 Testing Observations — The "How," Not Just the "What"

Testing observations go beyond describing what scores the student earned — document how the student approached the tasks:

  • Approach to novel tasks — methodical, impulsive, avoidant
  • Effort, motivation, anxiety, and stamina
  • Problem-solving style — strategies vs. guessing
  • Attention and impulsivity patterns
  • Response to redirection or examiner support
  • Fatigue effects and need for breaks
  • Performance differences across task types (timed vs. untimed, verbal vs. visual)
Example: "During the Word Reading subtest, [Student] appeared cooperative but hesitant. [He/She] guessed words based on the first letter on more difficult items (e.g., reading 'plan' as 'play'), occasionally re-reading words under breath. Pace was noticeably slower than expected for age."

Source: Stephens & Schultz, C-SEP (2015/2024)

🎯 Turning Observations Into Evidence
  • Describe observable actions — not interpretations. "Student erased six times in two sentences" rather than "student was frustrated."
  • Connect behaviors to skill domains — link what you saw to what it suggests about the area of concern
  • Pair notes with real examples — specific behavioral instances are more defensible than general statements
  • Integrate with interviews and test data — observations gain meaning when consistent with other sources
  • Link to access, participation, and progress — connect the observation to educational impact
Report language model: "During observation, [Student]… This aligns with teacher report indicating… and is consistent with formal assessment findings showing…"

Source: Stephens & Schultz, C-SEP (2015/2024)

When Scores Don't Show the Full Story — Well-Compensated Presentations
🔍 What "Well-Compensated" Means

A well-compensated presentation occurs when a student has a genuine learning disability — most commonly dyslexia — but norm-referenced scores in the affected area fall within the average range, masking the underlying profile. This happens because:

  • Intensive, sustained intervention worked — years of Tier II/III support built functional skills that show up as average scores. The disability did not disappear; the student learned to compensate.
  • Strong cognitive strengths compensate — high verbal reasoning or working memory scaffolds performance on tasks that would otherwise reveal the deficit.
  • Test ceiling effects — many reading measures have limited sensitivity to subtle processing differences once basic decoding is functional.
  • Speed and accuracy tradeoffs — a student may decode accurately but only under untimed conditions; timed fluency measures (TOWRE, DIBELS ORF) are more sensitive.
The disability is not absent — it is managed. Average scores in the context of years of intensive intervention, family history, phonological processing findings, and persistent characteristics reflect educational benefit, not the absence of the underlying pattern.

See also: Texas Dyslexia Handbook (2024); TEA TAA Letter on Dyslexia Services (2018)

⚖️ Which Data Carries the Most Weight

In well-compensated cases, Links 1 and 2 of the chain of evidence become the most diagnostic. Norm-referenced scores (Link 3) may look typical — but they are still interpreted in context, not in isolation.

  • Family history — documented dyslexia in parents or siblings is one of the strongest single predictors. Ask specifically and record it.
  • Intervention history and duration — how many years of Tier II/III? What was the intensity and fidelity? What was the initial severity before intervention?
  • Early vs. current data — compare Kindergarten/1st grade screeners and CBM to current performance. Growth that required sustained intensive support is meaningful data.
  • Timed fluency measures — TOWRE-2 (SWE, PDE), TOSREC, DIBELS ORF, and rapid naming composites are more sensitive to residual processing differences than untimed word reading tasks. Subtle weaknesses often persist here.
  • Phonological processing subtests — CTOPP-2 Phonological Memory and RAN subtests, even when the PA composite is average, may show subtle but meaningful patterns.
  • What happens at grade-level demand increases — students who compensate adequately in 2nd–3rd grade sometimes show re-emergence of difficulty in 5th–7th when text complexity, writing demands, and reading volume escalate.
  • Student and parent report — how hard is reading? Does the student avoid it? How much effort does typical reading take? Compensation has a cost.
Cross-validation question to ask yourself: "If this student had never received intervention, what would the scores look like today?" If the honest answer is "probably much lower," that is part of the pattern.
📝 "Zoom Out" Checklist — Documenting a Well-Compensated Profile

Data to gather and document explicitly

  • Number of years and tiers of intervention received; initial entry-level data
  • Family history of dyslexia or reading difficulty (parent interview — ask specifically)
  • Phonological awareness and rapid naming data even when reading scores are average
  • Timed vs. untimed performance discrepancy — does accuracy hold up under time pressure? (See the sketch after this list.)
  • CBM slope and benchmark trajectory from earliest available data through present
  • Classroom observation of actual reading behavior — strategy use, avoidance, effort, prosody
  • Student report of effort and reading experience
  • Teacher report of what the student looks like on independent, on-grade reading tasks
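
For the timed-vs.-untimed item above, one way to judge whether a gap between two standard scores exceeds measurement noise is the standard error of the difference, SD · √(2 − r₁ − r₂). A minimal sketch follows; the scores and reliability values are invented, so use the coefficients from the manuals.

```python
# Minimal sketch: is a timed vs. untimed score difference beyond chance?
# Scores and reliability values are invented; use the manuals' coefficients.
SD = 15.0

def se_difference(r1: float, r2: float) -> float:
    """Standard error of the difference between two standard scores."""
    return SD * (2 - r1 - r2) ** 0.5

untimed_ss, timed_ss = 96, 82      # e.g., untimed word reading vs. timed fluency
se_d = se_difference(0.94, 0.90)   # ≈ 6.0
critical = 1.96 * se_d             # ≈ 11.8 at the .05 level

diff = untimed_ss - timed_ss       # 14 > 11.8: unlikely to be measurement
print(diff > critical)             # error alone → True
```

Even a reliable difference is a single data point toward the speed/accuracy tradeoff described earlier, not a determination by itself.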

FIE language model for this scenario

Eligibility decision: Whether this profile results in an IDEA/ARD eligibility or a 504 referral depends on the second prong — does the disability require specially designed instruction now, or do accommodations alone provide curriculum access? That determination is data-driven, not automatic. See the Dyslexia IDEA vs. 504 Decision Guide ↗

Framework: Stephens & Schultz, C-SEP (2015/2024); Texas Dyslexia Handbook (2024); TEA TAA Letter on Dyslexia Services (2018)


TEA SLD Guidance Document (January 2025) — Major Themes

The TEA SLD Guidance Document (January 2025) uses the word "team" 95 times (C. Vielma, TEDA 2026 session on TEA SLD Guidance). The emphasis on multidisciplinary team (MDT) involvement is not incidental — it is the foundation of defensible evaluation practice. Data integration is not a solo activity.

Key themes from the guidance document: Emphasis on MDT; limitations of norm-referenced testing (especially cognitive); learning data required, NRT optional; multiple sources of data; and impact statements. The guidance explicitly states that cognitive testing is conducted on a case-by-case basis — it is a tool for understanding underachievement, not a required gateway to eligibility.

Best practices for achievement assessment (TEA, 2025): Review existing data before administering new assessments. Focus time and energy on directly assessing areas of academic concern. Consider all data types. Collaborate with teachers and curriculum specialists when interpreting CBM and screener results. Include OT on the MDT when graphomotor deficits are suspected.

Sources: Stephens & Schultz, C-SEP (2015/2024); TEA Guidance for the Comprehensive Evaluation of SLD (January 2025); Vielma, C. (TEDA 2026, session on TEA SLD Guidance document)

Reference Note: Clinical guidance and interpretive summaries on this page are original synthesis prepared for professional reference by educational diagnosticians. Legal citations reference federal and state statute (public domain). Assessment descriptions are paraphrased from published professional literature. Eligibility determinations must be made by a qualified multidisciplinary ARD team in accordance with IDEA and Texas TAC §89.1040. Barber Sped Hub is an independent diagnostic reference and is not affiliated with or endorsed by any test publisher, researcher, or professional organization.