Measurement Error


Measurement error in education generally refers to either (1) the difference between what a test score indicates and a student’s actual knowledge and abilities or (2) errors that are introduced when collecting and calculating data-based reports, figures, and statistics related to schools and students.

Because some degree of measurement error is inevitable in testing and data reporting, education researchers, statisticians, data professionals, and test developers often publicly acknowledge that performance data, such as high school graduation rates or college-enrollment rates, are not perfectly reliable, and they may even report the “margin of error” for a given statistic or finding. They likewise acknowledge that test scores do not always accurately reflect what students know or can do; in other words, there is no such thing as a perfectly reliable test of student knowledge and skill acquisition.
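On the testing side, this uncertainty is commonly quantified with the standard error of measurement (SEM) from classical test theory, which treats an observed score as a true score plus random error and estimates the error's typical size as SEM = SD × √(1 − reliability). The sketch below uses hypothetical figures (a score standard deviation of 15 and a reliability coefficient of 0.91); it is an illustration of the general formula, not any particular test's published statistics:

```python
import math

def standard_error_of_measurement(score_sd: float, reliability: float) -> float:
    """Classical test theory: SEM = SD * sqrt(1 - reliability)."""
    return score_sd * math.sqrt(1.0 - reliability)

def score_band(observed: float, sem: float, z: float = 1.96) -> tuple:
    """Approximate 95% band around a single observed score."""
    return (observed - z * sem, observed + z * sem)

# Hypothetical test: score SD of 15, reliability coefficient of 0.91
sem = standard_error_of_measurement(15, 0.91)
low, high = score_band(100, sem)
print(round(sem, 1))            # 4.5
print(round(low), round(high))  # 91 109
```

In this illustration, an observed score of 100 is consistent with a true score anywhere from roughly 91 to 109, which is one concrete way of saying that a single score is an estimate rather than an exact measurement.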

Measurement errors in testing may result from a wide variety of factors, such as a student’s mental and emotional state during the test period or the conditions under which the test was administered. For example, students may have been unusually tired, hungry, or emotionally distressed, or distractions such as loud noises, disruptive peers, or technical problems could have adversely affected test performance. Test scores for young children are often considered especially susceptible to measurement error, given that young children tend to have shorter attention spans and may not fully comprehend the importance of the test or take it seriously. In addition, young children of the same chronological age or grade level may be at very different stages of social, cognitive, and emotional development, and if a young child experiences a rapid developmental growth spurt, test results could quickly become outdated and therefore misrepresentative.

The following is a representative list of a few additional factors and problems that may give rise to measurement error in testing:

  • Questions may be ambiguously phrased, or answer keys may contain errors.
  • Test items, questions, and problems may not address the material students were actually taught.
  • Performance levels and cutoff scores, such as those considered to be “passing” or “proficient” on a particular test, may be flawed, poorly calibrated, or misrepresentative.
  • The scoring process may be poorly designed, and both human scorers and computer-scoring systems may make mistakes.
  • Test administrators could give students incorrect directions, help students cheat, or fail to create calm and conducive test-taking conditions.
  • Test-result data may be inaccurately recorded and reported.

Measurement errors in the reporting of education data and statistics are common and, to a greater or lesser extent, both expected and unavoidable. While human error may lead to inaccurate reporting, data systems and processes are intrinsically limited—i.e., it is simply not possible to create perfect data systems or collect data flawlessly, particularly as systems grow in scale and scope. National or statewide data systems—e.g., systems administered by government agencies to track important educational data such as high school graduation rates—are especially prone to measurement error, given the massive complexities entailed in collecting data from thousands of schools on the performance of hundreds of thousands or millions of students. For this reason, most large-scale education data are openly qualified as estimates.

The following is a representative list of a few additional factors and problems that may give rise to measurement error in educational data:

  • Flawed, imprecise, or mismanaged data-collection processes resulting in incorrect reports, records, figures, and statistics.
  • An absence of clear and understandable rules, guidelines, and standards for data collection and reporting processes, or ambiguous guidelines that give rise to misinterpretation and error.
  • Small sample sizes—such as in rural schools that may have small student populations and few minority students—that may distort the perception of performance for certain time periods, graduating classes, or student groups.
  • Divergent data-collection and data-reporting processes—such as the unique data-collection systems and requirements developed by states—that can lead to misrepresentative comparisons or system incompatibilities that produce errors.
  • High rates of transfer in and out of school systems—e.g., by the children of transient workers—that make it more difficult to accurately track the enrollment status of students.
  • Lack of adequate training, experience, or technical expertise in proper data-collection and -reporting procedures among those responsible for collecting and reporting data at the school, district, and state levels.
  • Intentional misrepresentations of student performance and enrollment, such as those that may accompany high-stakes testing.
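The small-sample issue noted above can be made concrete: under the usual normal approximation, the margin of error for a reported rate shrinks with the square root of the number of students, so the same graduation rate is far less stable when it comes from a class of 20 than from a class of 2,000. The figures below are hypothetical:

```python
import math

def margin_of_error(rate: float, n: int, z: float = 1.96) -> float:
    """Normal-approximation margin of error for a proportion (rate in [0, 1])."""
    return z * math.sqrt(rate * (1.0 - rate) / n)

# The same 85% graduation rate, with very different precision:
small = margin_of_error(0.85, 20)    # hypothetical rural school, 20 seniors
large = margin_of_error(0.85, 2000)  # hypothetical large district, 2,000 seniors
print(f"{small:.1%}")  # 15.6%
print(f"{large:.1%}")  # 1.6%
```

A swing of one or two students in a small graduating class can move a reported rate by several percentage points, which is why year-to-year changes in small schools' figures should be interpreted cautiously.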


While some degree of measurement error is—and perhaps always will be—unavoidable, many educators, schools, districts, government agencies, and test developers are taking steps to mitigate measurement error in both testing and data reporting.

In testing, measurement error is generally considered a relatively minor issue for low-stakes testing—i.e., when test results are not used to make important decisions about students, teachers, or schools. As the stakes attached to test performance rise, however, measurement error becomes a more serious issue, since test results may trigger a variety of consequences. Measurement error is one reason that many test developers and testing experts recommend against using a single test result to make important educational decisions. For example, the Standards for Educational and Psychological Testing—a set of guidelines jointly developed by the American Educational Research Association, the American Psychological Association, and the National Council on Measurement in Education—recommends that “in elementary or secondary education, a decision or characterization that will have a major impact on a test taker should not automatically be made on the basis of a single test score.”

The following are a few representative strategies that educators and test developers may employ to reduce measurement error in testing:

  • Test developers can carefully review questions for test bias and fairness, and remove or revise items that may adversely affect the performance of students of different races, cultural groups, or genders.
  • Test developers can conduct pilot tests to get feedback on difficulty levels, phrasing clarity, and bias, and then revise tests before they are administered.
  • To reduce errors in the human scoring of questions that cannot be scored by computer, such as open-response and essay questions, two or more scorers can score each item or essay. If they disagree, the item can be passed on to additional scorers.
  • Schools can tighten security practices to combat and prevent cheating by those administering and taking the tests.
  • Policy makers can lower or eliminate the consequences resulting from test results to minimize score inflation and reduce the motivation to manipulate results.
  • Instead of relying on one potentially inaccurate measure, schools can get more comprehensive information by using multiple methods to assess student achievement and learning growth.
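The multi-scorer strategy above amounts to a simple reconciliation rule: if two independent scores agree within a tolerance, combine them; otherwise route the response to an additional scorer. The sketch below is illustrative only; the function name, tolerance, and averaging rule are assumptions, not the procedure of any specific scoring program:

```python
from statistics import mean

def reconcile(score_a: float, score_b: float, tolerance: float = 1.0):
    """Resolve two independent human scores for one essay.

    Returns (final_score, needs_adjudication). If the scorers agree
    within `tolerance` points, the final score is their average;
    otherwise the essay is flagged for a third, adjudicating scorer.
    """
    if abs(score_a - score_b) <= tolerance:
        return mean([score_a, score_b]), False
    return None, True

print(reconcile(4, 5))  # (4.5, False): scores agree, so average them
print(reconcile(2, 5))  # (None, True): disagreement, route to a third scorer
```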

In educational data collection and reporting, measurement error can also become a significant issue, particularly when school-funding levels, penalties, or the perception of performance are influenced by publicly reported data such as dropout rates or graduation rates. For these and other reasons, improving the quality and accuracy of data systems, collection processes, and reporting requirements has become a growing priority for schools, policy makers, and government agencies, and a variety of organizations and initiatives, such as the Data Quality Campaign and the Common Education Data Standards, are working to improve the quality, consistency, and reliability of education data.

The following are a few representative strategies that educators and data experts may employ to reduce measurement error in data reporting:

  • “Unique student identifiers,” such as state-assigned codes or social-security numbers, can be used to track individual students as they move from grade to grade or school to school, increasing the reliability of the data.
  • Common data-collection and -reporting standards can be developed to improve the reliability of data and allow for performance comparisons across schools and states.
  • Redundant processes—multiple systems and people checking for errors—can be used to improve reporting accuracy.
  • Clearer guidelines and better training can be provided to those compiling and calculating data.
  • Improved technology and the use of compatible or interoperable systems can facilitate data quality and the exchange of data among different schools, organizations, and states.
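The redundant-process strategy above can be as simple as comparing the same figure drawn from two independent systems and flagging disagreements for human review. A minimal sketch, with hypothetical school names and enrollment counts:

```python
def cross_check(system_a: dict, system_b: dict) -> list:
    """Compare enrollment counts reported by two independent systems
    and return (school, count_a, count_b) for each school whose
    figures disagree. A missing school appears with a count of None."""
    discrepancies = []
    for school in sorted(set(system_a) | set(system_b)):
        a, b = system_a.get(school), system_b.get(school)
        if a != b:
            discrepancies.append((school, a, b))
    return discrepancies

# Hypothetical counts from a school information system vs. a state database
sis = {"Lincoln HS": 412, "Washington HS": 655}
state = {"Lincoln HS": 412, "Washington HS": 649}
print(cross_check(sis, state))  # [('Washington HS', 655, 649)]
```

Each flagged discrepancy can then be traced back to its source—a transfer recorded in one system but not the other, for instance—before the figure is publicly reported.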