


Assessment

In education, the term assessment refers to the wide variety of methods or tools that educators use to evaluate, measure, and document the academic readiness, learning progress, skill acquisition, or educational needs of students.

While assessments are often equated with traditional tests—especially the standardized tests developed by testing companies and administered to large populations of students—educators use a diverse array of assessment tools and methods to measure everything from a four-year-old’s readiness for kindergarten to a twelfth-grade student’s comprehension of advanced physics. Just as academic lessons have different functions, assessments are typically designed to measure specific elements of learning—e.g., the level of knowledge a student already has about the concept or skill the teacher is planning to teach or the ability to comprehend and analyze different types of texts and readings. Assessments also are used to identify individual student weaknesses and strengths so that educators can provide specialized academic support, educational programming, or social services. In addition, assessments are developed by a wide array of groups and individuals, including teachers, district administrators, universities, private companies, state departments of education, and groups that include a combination of these individuals and institutions.

While assessment can take a wide variety of forms in education, the following descriptions provide a representative overview of a few major forms of educational assessment.

Assessments are used for a wide variety of purposes in schools and education systems:

  • High-stakes assessments are typically standardized tests used for the purposes of accountability—i.e., any attempt by federal, state, or local government agencies to ensure that students are enrolled in effective schools and being taught by effective teachers. In general, “high stakes” means that important decisions about students, teachers, schools, or districts are based on the scores students achieve on a high-stakes test, and either punishments (sanctions, penalties, reduced funding, negative publicity, not being promoted to the next grade, not being allowed to graduate) or accolades (awards, public celebration, positive publicity, bonuses, grade promotion, diplomas) result from those scores. For a more detailed discussion, see high-stakes test.
  • Pre-assessments are administered before students begin a lesson, unit, course, or academic program. Students are not necessarily expected to know most, or even any, of the material evaluated by pre-assessments—they are generally used to (1) establish a baseline against which educators measure learning progress over the duration of a program, course, or instructional period, or (2) determine general academic readiness for a course, grade level, or program into which a student may be transferring.
  • Formative assessments are in-process evaluations of student learning that are typically administered multiple times during a unit, course, or academic program. The general purpose of formative assessment is to give educators in-process feedback about what students are learning or not learning so that instructional approaches, teaching materials, and academic support can be modified accordingly. Formative assessments are usually not scored or graded, and they may take a variety of forms, from more formal quizzes and assignments to informal questioning techniques and in-class discussions with students.
  • Summative assessments are used to evaluate student learning at the conclusion of a specific instructional period—typically at the end of a unit, course, semester, program, or school year. Summative assessments are typically scored and graded tests, assignments, or projects that are used to determine whether students have learned what they were expected to learn during the defined instructional period.

    Formative assessments are commonly said to be for learning because educators use the results to modify and improve teaching techniques during an instructional period, while summative assessments are said to be of learning because they evaluate academic achievement at the conclusion of an instructional period. Or as assessment expert Robert Stake put it, “When the cook tastes the soup, that’s formative assessment. When the customer tastes the soup, that’s summative assessment.”

  • Interim assessments are used to evaluate where students are in their learning progress and determine whether they are on track to perform well on future assessments, such as standardized tests, end-of-course exams, and other forms of “summative” assessment. Interim assessments are usually administered periodically during a course or school year (for example, every six or eight weeks) and separately from the process of instructing students (i.e., unlike formative assessments, which are integrated into the instructional process).
  • Placement assessments are used to “place” students into a course, course level, or academic program. For example, an assessment may be used to determine whether a student is ready for Algebra I or a higher-level algebra course, such as an honors-level course. For this reason, placement assessments are administered before a course or program begins, and the basic intent is to match students with appropriate learning experiences that address their distinct learning needs.
  • Screening assessments are used to determine whether students may need specialized assistance or services, or whether they are ready to begin a course, grade level, or academic program. Screening assessments may take a wide variety of forms in educational settings, and they may be developmental, physical, cognitive, or academic. A preschool screening test, for example, may be used to determine whether a young child is physically, emotionally, socially, and intellectually ready to begin preschool, while other screening tests may be used to evaluate health, potential learning disabilities, and other student attributes.

Assessments are also designed in a variety of ways for different purposes:

  • Standardized assessments are designed, administered, and scored in a standard, or consistent, manner. They often use a multiple-choice format, though some include open-ended, short-answer questions. Historically, standardized tests featured rows of ovals that students filled in with a number-two pencil, but increasingly the tests are computer-based. Standardized tests can be administered to large student populations of the same age or grade level in a state, region, or country, and results can be compared across individuals and groups of students. For a more detailed discussion, see standardized test.
  • Standards-referenced or standards-based assessments are designed to measure how well students have mastered the specific knowledge and skills described in local, state, or national learning standards. Standardized tests and high-stakes tests may or may not be based on specific learning standards, and individual schools and teachers may develop their own standards-referenced or standards-based assessments. For a more detailed discussion, see proficiency-based learning.
  • Common assessments are used in a school or district to ensure that all teachers are evaluating student performance in a more consistent, reliable, and effective manner. Common assessments are used to encourage greater consistency in teaching and assessment among teachers who are responsible for teaching the same content, e.g., within a grade level, department, or content area. They allow educators to compare performance results across multiple classrooms, courses, schools, and/or learning experiences (which is not possible when educators teach different material and individually develop their own distinct assessments). Common assessments share the same format and are administered in consistent ways—e.g., teachers give students the same instructions and the same amount of time to complete the assessment, or they use the same scoring guides to interpret results. Common assessments may be “formative” or “summative.” For more detailed discussions, see coherent curriculum and rubric.
  • Performance assessments typically require students to complete a complex task, such as a writing assignment, science experiment, speech, presentation, performance, or long-term project. Educators will often use collaboratively developed common assessments, scoring guides, rubrics, and other methods to evaluate whether the work produced by students shows that they have learned what they were expected to learn. Performance assessments may also be called “authentic assessments,” since they are considered by some educators to be more accurate and meaningful evaluations of learning achievement than traditional tests. For more detailed discussions, see authentic learning, demonstration of learning, and exhibition.
  • Portfolio-based assessments are collections of academic work—for example, assignments, lab results, writing samples, speeches, student-created films, or art projects—that are compiled by students and assessed by teachers in consistent ways. Portfolio-based assessments are often used to evaluate a “body of knowledge”—i.e., the acquisition of diverse knowledge and skills over a period of time. Portfolio materials can be collected in physical or digital formats, and they are often evaluated to determine whether students have met required learning standards. For a more detailed discussion, see portfolio.

The purpose of an assessment generally drives the way it is designed, and there are many ways in which assessments can be used. A standardized assessment can be a high-stakes assessment, for example, but so can other forms of assessment that are not standardized tests. A portfolio of student work can be used as both a “formative” and “summative” form of assessment. Teacher-created assessments, which may also be created by teams of teachers, are commonly used in a single course or grade level in a school, and these assessments are almost never “high-stakes.” Screening assessments may be produced by universities that have conducted research on a specific area of child development, such as the skills and attributes that a student should have when entering kindergarten to increase the likelihood that he or she will be successful, or the pattern of behaviors, strengths, and challenges that suggest a child has a particular learning disability. In short, assessments are usually created for highly specialized purposes.


While educational assessments and tests have been around since the days of the one-room schoolhouse, they have increasingly assumed a central role in efforts to improve the effectiveness of public schools and teaching. Standardized-test scores, for example, are arguably the dominant measure of educational achievement in the United States, and they are also the most commonly reported indicator of school, teacher, and school-system performance.

As schools become increasingly equipped with computers, tablets, and wireless internet access, a growing proportion of the assessments now administered in schools are either computer-based or online assessments—though paper-based tests and assessments are still common and widely used in schools. New technologies and software applications are also changing the nature and use of assessments in innumerable ways, given that digital-assessment systems typically offer an array of features that traditional paper-based tests and assignments cannot. For example, online-assessment systems may allow students to log in and take assessments during out-of-class time, or they may make performance results available to students and teachers immediately after an assessment has been completed (historically, it might have taken hours, days, or weeks for teachers to review, score, and grade all assessments for a class). In addition, digital and online assessments typically include features, or “analytics,” that give educators more detailed information about student performance. For example, teachers may be able to see how long it took students to answer particular questions or how many times a student failed to answer a question correctly before getting the right answer. Many advocates of digital and online assessments argue that such systems, if used properly, could help teachers “personalize” instruction. Because many digital and online systems can provide far more detailed information about the academic performance of students, educators can use this information to modify educational programs, learning experiences, instructional approaches, and academic-support strategies in ways that address the distinct learning needs, interests, aspirations, or cultural backgrounds of individual students.
In addition, many large-scale standardized tests are now administered online, though states typically allow students to take paper-based tests if computers are unavailable, if students prefer the paper-based option, or if students don’t have the technological skills and literacy required to perform well on an online assessment.

Given that assessments come in so many forms and serve so many diverse functions, a thorough discussion of the purpose and use of assessments could fill a lengthy book. The following descriptions, however, provide a brief, illustrative overview of a few of the major ways in which assessments—especially assessment results—are used in an attempt to improve schools and teaching:

  • System and school accountability: Assessments, particularly standardized tests, have played an increasingly central role in efforts to hold schools, districts, and state public-school systems “accountable” for improving the academic achievement of students. The most widely discussed and far-reaching example, the 2001 federal law commonly known as the No Child Left Behind Act, strengthened federal expectations from the 1990s and required that each state develop learning standards to govern what teachers should teach and students should learn. Under No Child Left Behind, standards are required in every grade level and content area from kindergarten through high school, and students must be tested annually in reading and mathematics in grades 3-8 and at least once in grades 10-12. Since the law’s passage, standardized tests have been developed and implemented to measure how well students are meeting the standards, and scores are reported publicly by state departments of education. The law also requires that test results be tracked and reported separately for different “subgroups” of students, such as minority students, students from low-income households, students with special needs, and students with limited proficiency in English. By publicly reporting the test scores achieved by different schools and student groups, and by tying those scores to penalties and funding, the law aims to close achievement gaps and improve schools deemed to be underperforming. While the No Child Left Behind Act is one of the most contentious educational policies in recent history, and the technicalities of the legislation are highly complex, it is one example of how assessment results are used as an accountability measure.
  • Teacher evaluation and compensation: In recent years, a growing number of elected officials, policy makers, and education reformers have argued that the best way to improve educational results is to ensure that students have effective teachers, and that one way to ensure effective teaching is to evaluate and compensate educators, at least in part, based on the test scores their students achieve. By basing a teacher’s income and job security on assessment results, the reasoning goes, administrators can identify and reward high-performing teachers or take steps to either help low-performing teachers improve or remove them from schools. Growing political pressure, coupled with the promise of federal grants, prompted many states to begin using student test results in teacher evaluations. This contentious reform strategy generally requires fairly complicated statistical techniques—known as value-added measures or growth measures—to determine how much of a positive or negative effect individual teachers have on the academic achievement of their students, based primarily on student assessment results.
  • Instructional improvement: Assessment results are often used as a mechanism for improving instructional quality and student achievement. Because assessments are designed to measure the acquisition of specific knowledge or skills, the design of an assessment can determine or influence what gets taught in the classroom (“teaching to the test” is a common, and often derogatory, phrase used to describe this general phenomenon). Formative assessments, for example, give teachers in-process feedback on student learning, which can help them make instructional adjustments during the teaching process, instead of having to wait until the end of a unit or course to find out how well students are learning the material. Other forms of assessment, such as standards-based assessments or common assessments, encourage educators to teach similar material and evaluate student performance in more consistent, reliable, or comparable ways.
  • Learning-needs identification: Educators use a wide range of assessments and assessment methods to identify specific student learning needs, diagnose learning disabilities (such as autism, dyslexia, or nonverbal learning disabilities), evaluate language ability, or determine eligibility for specialized educational services. In recent years, the early identification of specialized learning needs and disabilities, and the proactive provision of educational support services to students, has been a major focus of numerous educational reform strategies. For a related discussion, see academic support.


In education, there is widespread agreement that assessment is an integral part of any effective educational system or program. Educators, parents, elected officials, policy makers, employers, and the public all want to know whether students are learning successfully and progressing academically in school. The debates—many of which are complex, wide-ranging, and frequently contentious—typically center on how assessments are used, including how frequently they are administered and whether assessments are beneficial or harmful to students and the teaching process. While a comprehensive discussion of these debates is beyond the scope of this resource, the following is a representative selection of a few major issues being debated:

  • Is high-stakes testing, as an accountability measure, the best way to improve schools, teaching quality, and student achievement? Or do the potential consequences—such as teachers focusing mainly on test preparation and a narrow range of knowledge at the expense of other important skills, or increased incentives to cheat and manipulate test results—undermine the benefits of using test scores as a way to hold schools and educators more accountable and improve educational results?
  • Are standardized assessments truly objective measures of academic achievement? Or do they reflect intrinsic biases—in their design or content—that favor some students over others, such as wealthier white students from more-educated households over minority and low-income students from less-educated households? For more detailed discussions, see measurement error and test bias.
  • Are “one-size-fits-all” standardized tests a fair way to evaluate the learning achievement of all students, given that some students may be better test-takers than others? Or should students be given a variety of assessment options and multiple opportunities to demonstrate what they have learned?
  • Will more challenging and rigorous assessments lead to higher educational achievement for all students? Or will they end up penalizing certain students who come from disadvantaged backgrounds? And, conversely, will less-advantaged students be at an even greater disadvantage if they are not held to the same high educational standards as other students (because lowering educational standards for certain students, such as students of color, will only further disadvantage them and perpetuate the same cycle of low expectations that historically contributed to racial and socioeconomic achievement gaps)?
  • Do the costs—in money, time, and human resources—outweigh the benefits of widespread, large-scale testing? Would the funding and resources invested in testing and accountability be better spent on higher-quality educational materials, more training and support for teachers, and other resources that might improve schools and teaching more effectively? And is the pervasive use of tests providing valuable information that educators can use to improve instructional quality and student learning? Or are the tests actually taking up time that might be better spent on teaching students more knowledge and skills?
  • Are technological learning applications, including digital and online assessments, improving learning experiences for students, teaching them technological skills and literacy, or generally making learning experiences more interesting and engaging? Or are digital learning applications adding to the cost of education, introducing unwanted distractions in schools, or undermining the value of teachers and the teaching process?

Critical Friend


A critical friend is typically a colleague or other educational professional, such as a school coach, who is committed to helping an educator or school improve. A critical friend is someone who is encouraging and supportive, but who also provides honest and often candid feedback that may be uncomfortable or difficult to hear. In short, a critical friend is someone who agrees to speak truthfully, but constructively, about weaknesses, problems, and emotionally charged issues.


In education, the term critical friend was introduced in 1994 by the Annenberg Institute for School Reform, which began advocating a teacher-led approach to professional development called critical friends groups or professional learning communities—groups of educators who meet regularly, engage in structured professional discussions, and work collaboratively to improve their school or teaching skills. (It should be noted that some educators may not consider critical friends groups and professional learning communities to be strictly synonymous, and they may define both the terms and purpose of the strategies differently.) The National School Reform Faculty is widely considered to have popularized the term critical friends group.

The term critical friend, however, is also used more broadly outside of professional learning groups. The role of a critical friend is, generally speaking, based on the recognition that both professional and organizational improvement can be impeded when people and groups avoid facing hard truths, emotionally difficult subjects, and frank assessments of their own performance. At the same time, the critical-friend role is also based on the recognition that people will tend to continue avoiding hard truths, emotional subjects, and frank assessments of performance if these issues are not handled constructively, supportively, and professionally. For these reasons, critical friends—whether they are colleagues in a school or outside professionals—are believed to play a valuable role in helping educators improve their school or their teaching.

For a related discussion, see school coach.



One-to-One

The term one-to-one is applied to programs that provide all students in a school, district, or state with their own laptop, netbook, tablet computer, or other mobile-computing device. One-to-one refers to one computer for every student.


Given that computers, technology, and the internet are rapidly redefining nearly every area of modern life—from education to communications to careers—one-to-one programs are generally motivated by the following rationales:

  • Today’s students need consistent, at-the-ready access to computing devices throughout the day and, ideally, at home.
  • Teachers can only take full advantage of new learning technologies and online educational resources when all students are equipped with a computing device.
  • Teaching technological literacy and computing skills needs to be a priority in today’s schools.
  • Equipping all students with computing devices and incorporating technology into every course is the surest way to take full advantage of new learning technologies and produce students who are technologically skilled and literate.

Most of today’s schools have some form of computing technology available to teachers and students—such as computer labs (classrooms with computer workstations) or mobile computer stations (typically carts filled with laptop computers that can be wheeled around a school and shared by teachers and students)—but one-to-one computing environments are seen by many educators and reformers as the next logical step for schools. In schools without a one-to-one computing program, teachers may need to schedule computing time in advance, and—depending on a school’s computing options and computer supply—scheduling conflicts can arise. Teachers may also need to postpone or modify certain lessons, and valuable instructional time can be eroded because students may need to be moved to a computer lab, it may take extra time to get shared computers configured properly, or the computers may not have the required software, for example.

In addition to avoiding many logistical issues associated with more limited or restrictive computing options, one-to-one programs may give teachers greater flexibility in how they can use computers as instructional resources. For example, one-to-one programs:

  • Allow all students to work online simultaneously in a class or to work collaboratively on a project that is hosted in the cloud.
  • Allow teachers to use interactive, technology-assisted teaching strategies that require students to have a computing device. For example, teachers can pose questions to a class, and all students can respond using an online survey system. Instead of asking a question and picking one student to give an answer, teachers can get answers from all students in real time to see who has understood the material, who hasn’t, and who may need extra help.
  • Make it easier for students to save work on their own computer or for teachers to load specialized software programs on every computer used by students in a particular class.
  • Allow teachers to use “course-management software” to organize a class or assign long-term projects or homework that require students to use a computer. Otherwise, if some students do not have computers at home, teachers would have to assign homework that does not require computers, or they would have to modify expectations for students without access to a computer.
  • Make it easier to find cheaper or more up-to-date learning materials for students (for example, textbooks can be expensive and can quickly become outdated) and to diversify the types of learning tools, materials, and readings teachers make available to students, such as interactive e-textbooks, digital simulations, self-paced online tests, video-editing applications, or multimedia software.
  • Make it easier—or possible—to use new or more innovative teaching strategies such as blended learning and “flipped classrooms” or to incorporate online courses into the learning options schools make available to students.


One-to-one computing is frequently the subject of debate—most commonly because one-to-one programs cost significantly more than alternative options in which students and teachers share a smaller number of computers.

In addition to the potential benefits described above, the following are representative examples of the kinds of arguments that may be made by advocates of one-to-one programs:

  • One-to-one programs are a long-term investment. While the up-front costs may be significant, the long-term benefits outweigh the costs.
  • The computers allow teachers and students to work more efficiently, more effectively, or in more innovative ways. Advocates may also argue that technology can increase student motivation, engagement, and interest in learning, and that students will be able to learn more and learn in more exciting ways.
  • One-to-one programs provide more equitable access to technology. Students from lower-income families may have little or no access to computers, which places them at an educational disadvantage when it comes to acquiring technological skills and literacy.
  • More and more learning materials are being converted to or produced in digital formats, often at a lower cost, including a growing number of free and open-source educational resources. If teachers and students do not have computers, they won’t be able to take full advantage of these new learning tools and materials.
  • More standardized tests are being administered online. If students are not confident using computers or fluent in their use, students in schools without one-to-one programs will be disadvantaged when taking standardized tests, which can have consequences for both the students and the schools. For related discussions, see computer-adaptive test, test accommodations, and test bias.

The following are representative examples of the kinds of arguments that may be made by critics or skeptics of one-to-one programs:

  • The cost of purchasing and maintaining the devices is too high. In addition to the up-front costs entailed in purchasing devices, the long-term maintenance costs—from technical-support specialists to device repairs to software and network upgrades—can be significant.
  • Inadequate technical support can lead to myriad problems. If a sufficient number of computers are broken or malfunctioning, it can disrupt, delay, or derail classroom lessons and student projects. A poorly supported one-to-one program could become a major source of irritation and frustration in a school.
  • Students are not responsible enough to be given such devices. Portable devices are likely to be dropped or broken, especially by younger children, and students will use the computers in unsupervised settings, which can lead to dangerous or harmful online behaviors, from visiting social media sites to viewing inappropriate material to engaging in cyberbullying.
  • The computers may not be used effectively, or they may not produce the desired results or benefits. For example, the computers may end up being used as expensive word processors, not as the transformative learning tools they were advertised to be. If teachers do not embrace the new technology, if they are not provided with adequate training, or if they use computers to teach in the same traditional ways, then one-to-one programs are unlikely to produce the desired benefits to or changes in teaching methods.
  • The computers will erode instructional time. Teachers may need to spend more time managing online behaviors and distractions, while technical glitches, broken machines, and other problems can eat up valuable classroom time.

In addition to the points above, another potential source of debate is whether students should be allowed to take one-to-one devices home. Transporting mobile devices to and from school every day increases the likelihood that they will be broken, and students who take devices home will use them outside of secure in-school networks and adult-supervised settings, which increases the potential for harmful or irresponsible online behavior. For these reasons, one-to-one take-home policies are frequently debated or criticized.



In education, the term voice refers to the values, opinions, beliefs, perspectives, and cultural backgrounds of the people in a district, school, or school community—especially students, teachers, parents, and local citizens—as well as the degree to which those values, opinions, beliefs, and perspectives are considered, included, listened to, and acted upon when important decisions are being made in a district or school. The most common variations are student voice, teacher voice, and parent voice.

It should be noted that while the concept of voice is often presented in the singular and applied to diverse groups, such as teachers or parents, these groups rarely represent a unified body of values, opinions, beliefs, perspectives, and cultural backgrounds—it may be more accurate to say that “voices” are being represented, listened to, and acted upon. That said, the concept of voice—as both a philosophy and reform strategy—is usually sensitive to, inclusive of, and predicated on diversity, including individual, racial, socioeconomic, and cultural diversity.

While the inclusion of voice may take a wide variety of forms in schools, there are a few main types of voice:

  • Formal: When voice is formalized or institutionalized, school governance and organizational systems may be reconfigured to include teacher, student, and parent voices in leadership roles or major operational and educational decisions. A few common examples include parent-teacher associations (or parent-teacher-student associations), student councils, leadership teams, and student, teacher, or parent representatives who are elected to school boards or sit on official school committees. For a related discussion, see shared leadership.
  • Informal: When voice is informal, school leaders may take the opinions of students, teachers, and parents under advisement, but there is usually no formal obligation to act on their opinions or to include them in official leadership roles and decisions. A few common examples include administrative “open-door” policies, open-invitation community forums, and surveys of students, parents, and teachers.
  • Instructional: Educators also use the concept of voice in reference to the instruction of students. In these cases, teachers may give students a “voice” in the instructional process by modifying what and how they teach so that students can pursue personal interests or career aspirations. For example, students may be able to write an essay or create a short video documentary, depending on which mode of expression they prefer, or they may be able to research a topic from the standpoint of their familial or cultural background, among other options. For more detailed discussions, see differentiation, learning pathways, and personalized learning.
  • Cultural: Educators also use the concept of voice when discussing the presentation of academic material or the perspectives reflected in a text or other learning resource. In these cases, voice may refer to the cultural, racial, or political perspectives that are either present or absent in educational resources such as textbooks or tests. Because many historical texts used in schools are written from a Eurocentric standpoint, for example, teachers may choose to present the perspectives and historical accomplishments of prominent women and people of color. Similarly, English teachers may choose works of literature from outside the Western literary canon so that students are learning from authors who are not exclusively white and male, or so that the texts speak to the cultural backgrounds of the minority or immigrant students in a class, for example. For related discussions, see multicultural education and test bias.
  • Evaluative: Student and parent voice may also be considered in the evaluation of teachers, school leaders, and schools. For example, students may be surveyed about the effectiveness of their teachers, and the survey results could be factored into job-performance evaluations. Some districts may have “parent councils” that advise school leaders and function similarly to school boards, and students and parents may also be involved in the selection and hiring process of new teachers and administrators.


As both a philosophical stance and a school-improvement strategy, the concept of voice in education has grown increasingly popular in recent decades. Generally speaking, voice can be seen as an alternative to more hierarchical forms of governance in which school administrators may make unilateral, executive decisions with little or no input from students, teachers, and parents. Voice is also predicated on the belief or recognition that a school will be more successful—e.g., that teachers will be more effective and professionally fulfilled, that students will learn and achieve more, and that parents will feel more confidence in the school and more involved in their child’s education—if school leaders both consider and act upon the values, opinions, beliefs, and perspectives of the people in a school and community. The common phrases “honoring student voice” or “honoring teacher voice” generally refer to this conviction or to the process of including various “voices.” While the degree to which voice is both solicited and valued can vary considerably from school to school, educators are increasingly embracing the concept in both leadership and instructional decisions.

The following descriptions provide a brief overview of a few representative ways in which voice might intersect with efforts to improve schools:

  • Student voice: Historically, student councils and other forms of student-led government were the most common channels for students to share their opinions and viewpoints, but many of these opportunities did not allow students to make authentic contributions to the leadership of a school. More school districts now have voting or nonvoting student seats on the school board, and some states even elect student representatives to the state board of education. Students may also be asked to serve on a formal committee, such as a school-improvement committee, or participate in the hiring of a new superintendent, principal, or teacher. In addition to taking on leadership roles in a school, student voice is playing a larger role in instructional decisions. Students may be involved in selecting education materials, or they may be given more choices over learning content, products, and processes in the classroom (which educators consider to be a form of student voice). In addition, students may write stories for their school or community newspapers, and they may blog about their experiences in school and their opinions on school issues.
  • Teacher voice: In public schools, it is now more common for teachers to play a role in school-leadership decisions, and administrators are more likely to solicit and act upon teacher concerns and viewpoints than in the past. Historically, teacher unions and academic departments, which typically have chairpersons with defined leadership responsibilities, have been the most common channels through which teachers participated in school governance. In recent years, however, the role of teachers in leadership and instructional decisions has expanded and diversified, and alternative governance strategies, such as shared leadership and leadership teams, are becoming more common in schools throughout the United States. Teachers are also playing a more active role in instructional decisions, including the design of school curricula and assessments, and in the selection of academic texts, learning technologies, and other educational resources. More recently, teachers have become increasingly active in voicing their concerns about teacher-performance evaluations, including the criteria used to define effective teachers and determine whether their pay scales should be based in part on student performance (for related discussions, see high-stakes test and value-added measures). Teachers may also be involved in selecting the types of professional development and training offered by a school or district, including teacher-led forms of professional development such as professional learning communities. And, of course, teachers may also share their opinions with a larger audience by serving on committees at the district, state, or national levels; by writing books, blogs, or newspaper editorials; or by taking on a leadership role in a union or professional association, such as a membership organization for teachers in a specific subject area.
  • Parent voice: Historically, parent involvement in school leadership was fairly limited, consisting largely of traditional parent-teacher associations that, for example, raised money for school programs or organized school volunteers (among many other possible roles and responsibilities). In recent years, however, parents are increasingly being asked, or are themselves requesting, to serve on formal school committees and leadership teams, or to provide their opinions and feedback on a wide variety of issues and programs. At the elementary level, parent volunteerism in schools is quite common, although volunteerism rates tend to decline as children grow older. Given their personal and emotional investment in the success of a school their child attends, parents, guardians, and family members may be more likely to run for seats on the district school board or seek local elected office. And with the advent of online organizing and advocacy tools, and a concurrent increase in citizen journalism and activism, parents are also forming their own organizations to advocate for or fight against particular issues, such as bullying, special-needs education, or school funding. In addition, parent involvement in school activities is considered particularly important for students more likely to struggle in school, such as students from lower-income or less-educated households, recently arrived immigrant or refugee students, or students with physical or learning disabilities.

Measurement Error


Measurement error in education generally refers to either (1) the difference between what a test score indicates and a student’s actual knowledge and abilities or (2) errors that are introduced when collecting and calculating data-based reports, figures, and statistics related to schools and students.

Because some degree of measurement error is inevitable in testing and data reporting, education researchers, statisticians, data professionals, and test developers often publicly acknowledge that performance data, such as high school graduation rates or college-enrollment rates, are not perfectly reliable (they may even report the “margin of error” for a given statistic or finding) or that test scores don’t always accurately reflect what students know or can do—i.e., that there is no such thing as a perfectly reliable test of student knowledge and skill acquisition.
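The “margin of error” mentioned above can be illustrated with a quick calculation. The following is a minimal sketch, not a method described in this article: it uses the standard normal approximation for a proportion, and the 85% graduation rate and sample of 400 students are invented figures for illustration.

```python
import math

def margin_of_error(proportion, sample_size, z=1.96):
    """Approximate 95% margin of error for a reported rate
    (normal approximation to the binomial proportion)."""
    return z * math.sqrt(proportion * (1 - proportion) / sample_size)

# Hypothetical example: an 85% graduation rate estimated from 400 students.
rate = 0.85
n = 400
moe = margin_of_error(rate, n)
print(f"Graduation rate: {rate:.0%} ± {moe:.1%}")  # prints "Graduation rate: 85% ± 3.5%"
```

The point of the sketch is simply that a reported statistic carries an uncertainty band whose width shrinks as the sample grows, which is why small populations (see the small-sample-size factor below) produce less reliable figures.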

Measurement errors in testing may result from a wide variety of factors, such as a student’s mental and emotional state during the test period or the conditions under which the test was administered. For example, students may have been unusually tired, hungry, or emotionally distressed, or distractions such as loud noises, disruptive peers, or technical problems could have adversely affected test performance. Test scores for young children are often considered to be especially susceptible to measurement error, given that young children tend to have shorter attention spans and they may not be able to fully comprehend the importance of the test and take it seriously. In addition, young children of the same chronological age or grade level may be at very different stages of social, cognitive, and emotional development, and if a young child experiences a rapid developmental growth spurt, test results could quickly become outdated and therefore misrepresentative.

The following is a representative list of a few additional factors and problems that may give rise to measurement error in testing:

  • Questions may be ambiguously phrased, or answer keys may contain inaccuracies.
  • Test items, questions, and problems may not address the material students were actually taught.
  • Performance levels and cutoff scores, such as those considered to be “passing” or “proficient” on a particular test, may be flawed, poorly calibrated, or misrepresentative.
  • The scoring process may be poorly designed, and both human scorers and computer-scoring systems may make mistakes.
  • Test administrators could give students incorrect directions, help students cheat, or fail to create calm and conducive test-taking conditions.
  • Test-result data may be inaccurately recorded and reported.

Measurement errors in the reporting of education data and statistics are common and, to a greater or lesser extent, both expected and unavoidable. While human error may lead to inaccurate reporting, data systems and processes are intrinsically limited—i.e., it is simply not possible to create perfect data systems or collect data flawlessly, particularly as systems grow in scale and scope. National or statewide data systems—e.g., systems administered by government agencies to track important educational data such as high school graduation rates—are especially prone to measurement error, given the massive complexities entailed in collecting data from thousands of schools on the performance of hundreds of thousands or millions of students. For this reason, most large-scale education data are openly qualified as estimates.

The following is a representative list of a few additional factors and problems that may give rise to measurement error in educational data:

  • Flawed, imprecise, or mismanaged data-collection processes resulting in incorrect reports, records, figures, and statistics.
  • An absence of clear and understandable rules, guidelines, and standards for data collection and reporting processes, or ambiguous guidelines that give rise to misinterpretation and error.
  • Small sample sizes—such as in rural schools that may have small student populations and few minority students—that may distort the perception of performance for certain time periods, graduating classes, or student groups.
  • Divergent data-collection and data-reporting processes—such as the unique data-collection systems and requirements developed by states—that can lead to misrepresentative comparisons or systems incompatibilities that produce errors.
  • High rates of transfer in and out of school systems—e.g., by the children of transient workers—that make it more difficult to accurately track the enrollment status of students.
  • Lack of adequate training, experience, or technical expertise in proper data-collection and -reporting procedures among those responsible for collecting and reporting data at the school, district, and state levels.
  • Intentional misrepresentations of student performance and enrollment, such as those that may accompany high-stakes testing.


While some degree of measurement error is—and perhaps always will be—unavoidable, many educators, schools, districts, government agencies, and test developers are taking steps to mitigate measurement error in both testing and data reporting.

In testing, measurement error is generally considered a relatively minor issue for low-stakes testing—i.e., when test results are not used to make important decisions about students, teachers, or schools. As the stakes attached to test performance rise, however, measurement error becomes a more serious issue, since test results may trigger a variety of consequences. Measurement error is one reason that many test developers and testing experts recommend against using a single test result to make important educational decisions. For example, the Standards for Educational and Psychological Testing—a set of proposed guidelines jointly developed by the American Educational Research Association, American Psychological Association, and the National Council on Measurement in Education—recommends that “in elementary or secondary education, a decision or characterization that will have a major impact on a test taker should not automatically be made on the basis of a single test score.”

The following are a few representative strategies that educators and test developers may employ to reduce measurement error in testing:

  • Test developers can carefully review questions for test bias and fairness, and remove or revise items that may adversely affect the performance of students of different races, cultural groups, or genders.
  • Test developers can conduct pilot tests to get feedback on difficulty levels, phrasing clarity, and bias, and then revise tests before they are administered.
  • To reduce errors in the human scoring of questions that cannot be scored by computer, such as open-response and essay questions, two or more scorers can score each item or essay. If they disagree, the item can be passed on to additional scorers.
  • Schools can tighten security practices to combat and prevent cheating by those administering and taking the tests.
  • Policy makers can lower or eliminate the consequences resulting from test results to minimize score inflation and reduce the motivation to manipulate results.
  • Instead of relying on one potentially inaccurate measure, schools can get more comprehensive information by using multiple methods to assess student achievement and learning growth.
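The double-scoring strategy in the list above can be sketched as a simple routine. This is an illustrative sketch only, with a hypothetical score scale, disagreement threshold, and function names; real scoring programs define their own adjudication rules.

```python
def resolve_score(score_a, score_b, adjudicate, max_gap=1):
    """Combine two human scorers' ratings of one essay item.
    If the scores agree within max_gap points, average them;
    otherwise pass the essay to an additional scorer."""
    if abs(score_a - score_b) <= max_gap:
        return (score_a + score_b) / 2
    return adjudicate()  # a third scorer resolves the disagreement

# Hypothetical usage: scorers give 4 and 5 (close enough to average),
# then 2 and 5 (too far apart, so a third scorer decides).
print(resolve_score(4, 5, adjudicate=lambda: 3))  # prints 4.5
print(resolve_score(2, 5, adjudicate=lambda: 3))  # prints 3
```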

In educational data collection and reporting, measurement error can also become a significant issue, particularly when school-funding levels, penalties, or the perception of performance are influenced by publicly reported data, such as dropout rates or graduation rates, for example. For these and other reasons, improving the quality and accuracy of data systems, collection processes, and reporting requirements has become a growing priority for schools, policy makers, and government agencies, and a variety of organizations and initiatives, such as the Data Quality Campaign and the Common Education Data Standards, are working to improve quality, consistency, and reliability of education data.

The following are a few representative strategies that educators and data experts may employ to reduce measurement error in data reporting:

  • “Unique student identifiers,” such as state-assigned codes or social-security numbers, can be used to track individual students as they move from grade to grade or school to school, which increases data reliability.
  • Common data-collection and -reporting standards can be developed to improve the reliability of data and allow for performance comparisons across schools and states.
  • Redundant processes—multiple systems and people checking for errors—can be used to improve reporting accuracy.
  • Clearer guidelines and better training can be provided to those compiling and calculating data.
  • Improved technology and the use of compatible or interoperable systems can facilitate data quality and the exchange of data among different schools, organizations, and states.
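The unique-identifier strategy above can be illustrated with a toy record merge. This is a sketch with invented sample data; the ID format and field names are hypothetical, not drawn from any real reporting system.

```python
# Two schools report enrollment records for partially overlapping students.
school_a = [{"id": "ST-001", "grade": 9}, {"id": "ST-002", "grade": 10}]
school_b = [{"id": "ST-002", "grade": 10}, {"id": "ST-003", "grade": 11}]

def merge_by_id(*record_lists):
    """Deduplicate student records across reporting systems by keying
    on a unique identifier, so a transfer student is counted once
    rather than once per school that reports him or her."""
    merged = {}
    for records in record_lists:
        for rec in records:
            merged[rec["id"]] = rec  # later reports overwrite earlier ones
    return merged

students = merge_by_id(school_a, school_b)
print(len(students))  # prints 3: three unique students, not four raw records
```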

Bloom’s Taxonomy


Bloom’s taxonomy is a classification system used to define and distinguish different levels of human cognition—i.e., thinking, learning, and understanding. Educators have typically used Bloom’s taxonomy to inform or guide the development of assessments (tests and other evaluations of student learning), curriculum (units, lessons, projects, and other learning activities), and instructional methods such as questioning strategies.

Original Taxonomy

Bloom’s taxonomy was originally published in 1956 by a team of cognitive psychologists at the University of Chicago. It is named after the committee’s chairman, Benjamin Bloom (1913–1999). The original taxonomy was organized into three domains: Cognitive, Affective, and Psychomotor. Educators have primarily focused on the Cognitive model, which includes six different classification levels: Knowledge, Comprehension, Application, Analysis, Synthesis, and Evaluation. The group sought to design a logical framework for teaching and learning goals that would help researchers and educators understand the fundamental ways in which people acquire and develop new knowledge, skills, and understandings. Their initial intention was to help academics avoid duplicative or redundant efforts in developing different tests to measure the same educational objectives. The system was originally published under the title Taxonomy of Educational Objectives: The Classification of Educational Goals, Handbook 1: Cognitive Domain.

Some users of the taxonomy place more emphasis on the hierarchical nature of the framework, asserting that the first three elements—Knowledge, Comprehension, and Application—represent lower levels of cognition and learning, while Analysis, Synthesis, and Evaluation are considered higher-order skills. For this reason, the taxonomy is often graphically represented as a pyramid with higher-order cognition at the top.

While Bloom’s taxonomy initially received little fanfare, it gradually grew in popularity and attracted further study. The system remains widely taught in undergraduate and graduate education programs throughout the United States, and it has also been translated into multiple languages and used around the world.

Revised Taxonomy

In 2001, another team of scholars—led by Lorin Anderson, a former student of Bloom’s, and David Krathwohl, a Bloom colleague who served on the academic team that developed the original taxonomy—released a revised version of Bloom’s taxonomy called A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives. The “Revised Bloom’s Taxonomy,” as it is commonly called, was intentionally designed to be more useful to educators and to reflect the common ways in which it had come to be used in schools.

In the revised version, three categories were renamed and all the categories were expressed as verbs rather than nouns. Knowledge was changed to Remembering, Comprehension became Understanding, and Synthesis was renamed Creating. In addition, Creating became the highest level in the classification system, switching places with Evaluating. The revised version is now Remembering, Understanding, Applying, Analyzing, Evaluating, and Creating, in that order.


Critics of the original taxonomy have questioned whether human cognition can be divided into distinct categories, particularly sequential or hierarchical categories. Others embrace the utility of the classification system, while still recognizing that it does not—and cannot—represent human thought or learning in all their complexity and sophistication. Most criticism is focused less on the system itself and more on the ways in which educators interpret and use the taxonomy. For example, teachers may view the system as a linear prescription, believing that students must first begin with remembering, move on to understanding, and proceed through the levels to creating. Other educators may place too much emphasis on the importance of higher-order thinking—at the expense of lower-order skills—despite the fact that acquiring a strong foundation of knowledge, information, and facts is essential in the application of higher-level thinking skills. Some educators have even proposed an alternative formulation, suggesting that the taxonomy should be reversed because higher-level thinking skills require that students both remember and understand underlying concepts first. Others suggest that the taxonomy should be interpreted as a non-hierarchical continuum in which no one form of cognition is more or less important.

While still widely used, Bloom’s taxonomy is gradually being supplemented—and may perhaps even be supplanted one day—by new insights into the workings of human thought and learning made possible by advances in brain imaging and cognitive science. Still, it is likely, given its logical simplicity and utility, that Bloom’s taxonomy will continue to be widely used by educators. For a related discussion, see brain-based learning.



The term rigor is widely used by educators to describe instruction, schoolwork, learning experiences, and educational expectations that are academically, intellectually, and personally challenging. Rigorous learning experiences, for example, help students understand knowledge and concepts that are complex, ambiguous, or contentious, and they help students acquire skills that can be applied in a variety of educational, career, and civic contexts throughout their lives.

While dictionaries define the term as rigid, inflexible, or unyielding, educators frequently apply rigor or rigorous to assignments that encourage students to think critically, creatively, and more flexibly. Likewise, they may use the term rigorous to describe learning environments that are not intended to be harsh, rigid, or overly prescriptive, but that are stimulating, engaging, and supportive.

In education, rigor is commonly applied to lessons that encourage students to question their assumptions and think deeply, rather than to lessons that merely demand memorization and information recall. For example, a fill-in-the-blank worksheet or multiple-choice test would not be considered rigorous by many educators. Although courses such as AP United States History are widely seen as rigorous because of the comparatively demanding workload or because the course culminates in a difficult test, a more expansive view of rigor would also encompass academic relevance and critical-thinking skills such as interpreting and analyzing historical data, making connections between historical periods and current events, using both primary and secondary sources to support an argument or position, and arriving at a novel interpretation of a historical event after conducting extensive research on the topic.

While some educators may equate rigor with difficulty, many educators would argue that academically rigorous learning experiences should be sufficiently and appropriately challenging for individual students or groups of students, not simply difficult. Advocates contend that appropriately rigorous learning experiences motivate students to learn more and learn it more deeply, while also giving them a sense of personal accomplishment when they overcome a learning challenge—whereas lessons that are simply “hard” will more likely lead to disengagement, frustration, and discouragement.

One common way in which educators do use rigor to mean unyielding or rigid is when they are referring to “rigorous” learning standards and high expectations—i.e., when they are calling for all students to be held to the same challenging academic standards and expectations. In this sense, rigor may be applied to educational situations in which students are not allowed to “coast” or “slide by” because standards, requirements, or expectations are low. In these cases, rigor is connected to the concept of educational equity, the belief that all students—regardless of their race, ethnicity, gender, socioeconomic status, English proficiency, or disability—should pursue a challenging course of study that will prepare them for success in later life. For example, students of color, on average, tend to be disproportionately represented in lower-level classes with lower academic expectations (and possibly lower-quality teaching), which can give rise to achievement gaps or “cycles of low expectation” in which stereotypes about the academic performance of minorities are reinforced and perpetuated because minority students are held to lower academic standards or taught less than their peers (for a related discussion, see stereotype threat). Enrolling students of color in “rigorous” academic programs that hold them to high academic standards is one way that educators may attempt to close achievement gaps and disrupt the self-perpetuating nature of low expectations.



The term at-risk is often used to describe students or groups of students who are considered to have a higher probability of failing academically or dropping out of school. The term may be applied to students who face circumstances that could jeopardize their ability to complete school, such as homelessness, incarceration, teenage pregnancy, serious health issues, domestic violence, transiency (as in the case of migrant-worker families), or other conditions, or it may refer to learning disabilities, low test scores, disciplinary problems, grade retentions, or other learning-related factors that could adversely affect the educational performance and attainment of some students. While educators often use the term at-risk to refer to general populations or categories of students, they may also apply the term to individual students who have raised concerns—based on specific behaviors observed over time—that indicate they are more likely to fail or drop out.

When the term is used in educational contexts without qualification, specific examples, or additional explanation, it may be difficult to determine precisely what “at-risk” is referring to. In fact, “at-risk” can encompass so many possible characteristics and conditions that the term, if left undefined, could be rendered effectively meaningless. Yet in certain technical, academic, and policy contexts—such as when federal or state agencies delineate “at-risk categories” to determine which students will receive specialized educational services, for example—the term is usually used in a precise and clearly defined manner. For example, states, districts, research studies, and organizations may create at-risk definitions that can encompass a broad range of specific student characteristics, such as the following:

  • Physical disabilities and learning disabilities
  • Prolonged or persistent health issues
  • Habitual truancy, incarceration history, or adjudicated delinquency
  • Family welfare or marital status
  • Parental educational attainment, income levels, employment status, or immigration status
  • Households in which the primary language spoken is not English

In most cases, “risk factors” are situational rather than innate. With the exception of certain characteristics such as learning disabilities, a student’s perceived risk status is rarely related to his or her ability to learn or succeed academically; it is largely or entirely a product of life circumstances. For example, attending a low-performing school could be considered a risk factor. If a school is underfunded and cannot provide essential services, or if its teaching quality and performance record are poor, the school could conceivably contribute to higher rates of student absenteeism, course failures, and attrition.


Generally speaking, the behaviors and characteristics associated with being an “at-risk student” are, in most cases, based on research and observable patterns in student demographics and school performance. Numerous academic studies have demonstrated correlations between certain risk factors and a student’s likelihood of succeeding academically, graduating from high school, or pursuing postsecondary education. Such correlations have given rise to a variety of reform strategies aimed at identifying student risk factors and then intervening with assistance and support intended to help “at-risk” students succeed academically and complete school.

In terms of general education-reform trends, schools are increasingly taking a proactive approach to at-risk students (early identification of risk factors followed by support), rather than a passive or reactive approach (allowing students to drop out, fall behind their peers academically, or fail courses before intervening). The basic rationale motivating these reforms is that schools can help at-risk students by increasing exposure to “success factors”—such as the personal attention and guidance of an adult, for example—and mitigating any risk factors that are within their control, such as reducing expulsions and grade retentions, which can increase the chances that a student will drop out.


Some educators dislike the term at-risk not only because it is imprecise, but also because they believe it can give rise to overgeneralizations that may stigmatize students, particularly when the term is applied to large, diverse groups such as minorities or students from lower-income households. They may also fear that such labels may perpetuate the very kinds of societal perceptions, generalizations, and stereotypes that contribute to students being at greater risk of failure or of dropping out in the first place. If minorities or students from lower-income households are consistently labeled “at-risk,” for example, schools and educators may respond by treating them in ways that could inadvertently perpetuate their at-risk status. For example, schools may enroll non-English-speaking students in specialized programs that separate them from their English-speaking peers. While the intention in this case is to provide the specialized language instruction that the students need, the program may also give rise to feelings of cultural isolation, or it may lower academic expectations so that participating students fall further and further behind their peers academically. Consequently, these students may drop out because they don’t feel connected to the larger school culture or see the value of education, or they may lose hope that they will ever catch up or graduate (for a more detailed discussion of this specific example, see dual-language education). Research on stereotype threat and the Pygmalion effect has provided some evidence to support these general claims.

Many educators and researchers have also noted that different individuals within the same demographic or risk categories may have very different innate abilities, familial resources, support systems, or other personal or situational characteristics that can lead them to be more resilient or successful than others; consequently, these students would be less “at-risk” than many of their peers. In this view, at-risk is an overly broad label that inevitably fails to take into account the true complexity of any particular student’s situation. The concern is that, if schools act on general categorical assumptions, rather than diagnosing the specific learning needs of individual students and using that information to provide targeted academic support or more personalized learning experiences, the support they provide to students may be less useful or effective.


Course Credit
Credits are one of the primary methods used to determine and document that students have met academic requirements, generally at the high school level. Credits are awarded upon completing and passing a course or required school program. In the United States, credits are often based on the Carnegie unit, or 120 hours of instructional time (one hour of instruction a day, five days a week, for 24 weeks). However, the actual duration of credit-bearing courses may differ significantly from the Carnegie-unit standard.
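The Carnegie-unit arithmetic described above can be sketched in a few lines. The schedule figures are the ones quoted in the paragraph; the function and constant names are our own, for illustration only:

```python
# Carnegie unit: roughly 120 hours of instructional time per credit.
HOURS_PER_CREDIT = 120

def carnegie_hours(hours_per_day, days_per_week, weeks):
    """Total instructional hours for a given course schedule."""
    return hours_per_day * days_per_week * weeks

# One hour of instruction a day, five days a week, for 24 weeks:
total = carnegie_hours(1, 5, 24)
print(total)                      # 120 hours
print(total / HOURS_PER_CREDIT)  # 1.0 -> one Carnegie unit
```

As the text notes, actual credit-bearing courses often depart from this schedule, so the calculation is a nominal standard rather than a description of practice.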

Most public high schools require students to accumulate credits to earn a diploma. While schools and districts determine credit requirements, states require schools to have minimum credit requirements in place. For example, a state might require students to earn a minimum of 18 credits to be eligible for a high school diploma, but a school may choose to increase credit requirements to 24 credits or higher. While credit requirements vary from state to state and school to school, they generally outline minimum requirements in the following subject areas: English language arts, mathematics, social studies, science, health, physical education, technology, and world languages. Schools also typically require students to earn a certain number of “elective” credits as well, and elective courses can span a wide variety of subject areas, including those listed above. For a related discussion, see core course of study.


In recent years, the traditional course credit has become the object of reform, particularly as an extension of proficiency-based learning or of efforts to change assessment strategies, grading practices, graduation requirements, and core courses of study in schools. Some states have sought to raise educational expectations, increase instructional time in certain subject areas, and improve student preparation by raising minimum credit requirements. For example, state regulations may require public high school students to complete four “years” of English and math—the equivalent of four credits in each subject—but only two or three years of science and social studies. As a way to promote stronger student preparation in science and social studies, states may decide to increase credit requirements. Other subject areas, such as technology, health, or world language, for example, have also been subject to increases in minimum credit requirements. Districts and schools may also elect to increase credit requirements independently, and some education organizations have recommended stronger credit requirements as a strategy for promoting higher academic achievement and more prepared graduates. In effect, increasing credit requirements in a given subject area increases the amount of time students will be taught, which increases the likelihood that they will be better educated in that subject area.

Critics of course credit may argue, however, that credit-based systems allow students to pass courses, earn credits, and get promoted from one grade level to the next even though they may not have acquired essential knowledge and skills, or they may not be adequately prepared for the next grade or for higher-level courses. The credit is often cited as one of the reasons why some students can earn a high school diploma, for example, and yet still struggle with basic reading, writing, and math skills.

A term commonly associated with credit-related reforms is “seat time”—a reference to the 120-hour Carnegie unit upon which most course credits are based. The basic idea is that credits more accurately measure the amount of time students have been taught, rather than what they have actually learned or failed to learn. For example, one student may earn an A in a course, while another student earns a D, and yet both may earn credit for passing the course. Given that the two grades likely represent significantly different levels of learning acquisition, what does the credit actually represent? In addition, if the awarding of credit is not based on some form of consistently applied learning standards—expectations for what students should know and be able to do at a particular stage of their education—then it becomes difficult to determine what students have learned or failed to learn, further undermining the credit as a reliable measurement for learning acquisition and academic accomplishment.

Some educators and education reformers argue that strategies such as learning standards, proficiency-based learning, and demonstrations of learning, among others, provide more valid and reliable ways to determine what students have learned, whether they should be promoted to the next grade level, and whether they should receive a diploma.


Credits are a familiar, widely understood concept, and their use is so widespread that people have become accustomed to them; this familiarity may itself shape debates about course-credit reforms, since some may question why something so universally used needs to be changed. That said, credits are more likely to be the indirect object of debates about related issues, such as learning standards, grading practices, or proficiency-based learning.

Some advocates might argue, for example, that credits are a simple, widely used way for schools to ensure that students receive a certain amount of instructional time in important subject areas. They may also point out that minimum credit requirements imposed by states have been effective in raising educational expectations and improving student preparation in critical subject areas.

Critics of credit-based systems will likely echo the points made above, questioning whether credits should be used at all given that they are an imprecise way to measure learning acquisition and academic accomplishment. Credits, they may contend, provide a false sense of security: while having earned credit makes it appear that students are learning—i.e., they have passed courses—credits may in fact be misleading and misrepresentative, since students are often able to earn credit even though they have failed to learn what the course was intended to teach. To detractors, schools should instead be measuring what students have learned or not learned—using time-based requirements such as credits, rather than learning-acquisition requirements such as learning standards, will simply allow students to continue passing courses, moving on to the next grade level, and graduating even though they may lack important knowledge and skills.

Personal Learning Plan


A personal learning plan (or PLP) is developed by students—typically in collaboration with teachers, counselors, and parents—as a way to help them achieve short- and long-term learning goals, most commonly at the middle school and high school levels. Personal learning plans are generally based on the belief that students will be more motivated to learn, will achieve more in school, and will feel a stronger sense of ownership over their education if they decide what they want to learn, how they are going to learn it, and why they need to learn it to achieve their personal goals.

While personal learning plans may take a wide variety of forms from school to school, they tend to share many common features. For example, when developing their plans, students may be asked to do any or all of the following:

  • Think about and describe their personal life aspirations, particularly their collegiate and career goals.
  • Self-assess their individual learning strengths and weaknesses, or reflect on what they have academically achieved, excelled at, or struggled with in the past.
  • Identify specific learning gaps or skill deficiencies that should be addressed in their education, or specific knowledge, skills, and character traits they would like to acquire.
  • List or describe their personal interests, passions, pursuits, and hobbies, and identify ways to integrate those interests into their education.
  • Chart a personal educational program that will allow them to achieve their educational and aspirational goals while also fulfilling school requirements, such as particular learning standards or credit and course requirements for graduation.
  • Document major learning accomplishments or milestones.

The general goal of a personal learning plan is to bring greater coherence, focus, and purpose to the decisions students make about their education. For this reason, plans may also include learning experiences that occur outside of the school, such as internships, volunteer opportunities, and summer programs students want to pursue or books they would like to read. For a related discussion, see learning pathway.

To help students develop personal learning plans, educators typically create a template form and process, such as a series of questions or a multiyear course-planning chart that allows students to map out the specific classes they want to take before graduating. Personal learning plans may help engage parents in the planning process and in substantive discussions with their children about their life goals and educational interests, while also helping teachers learn more about their students and their particular interests and learning needs. Personal learning plans are commonly revisited and modified annually to reflect changes in student learning needs, interests, and aspirations.

The use of personal learning plans in schools may be required or encouraged by state policies and departments of education, and districts and schools may require students to create a personal learning plan. Personal learning plans are distinct from individualized education programs (or IEPs), which are federally mandated plans created for students who receive special-education services. For these students, an individualized education program may also serve as their personal learning plan.


Personal learning plans may accompany a wide variety of school-reform strategies and philosophies, including differentiation, personalized learning, relevance, student-centered learning, and voice, among others (to more fully understand the rationale motivating the use of personal learning plans as a reform strategy, we recommend reading these entries). In many cases, the completion, monitoring, and modification of personal learning plans takes place in advisories—regularly scheduled periods of time during which teachers meet with small groups of students for the purpose of advising them on academic, social, and future-planning issues.

Schools may use personal learning plans to achieve a wide variety of educational goals, including the following representative examples:

  • They want students to take greater responsibility for their education, be more thoughtful and goal oriented about the educational choices they make, and use their time in school more purposefully.
  • They want teachers to have a better understanding of the interests, learning needs, and aspirations of their students so they can use that information to teach and support them more effectively.
  • They want students to challenge themselves and consider learning opportunities they may not have considered otherwise.
  • They want parents to be more engaged in planning their child’s education and more informed about their child’s interests, learning needs, and aspirations.
  • They want students to have a clear direction in their education so that they meet expected learning standards and graduate prepared for higher education and careers.


While the concept is rarely seen as controversial, skepticism, criticism, and debate may arise if personal learning plans are viewed as burdensome, add-on requirements rather than as central organizing tools for a student’s academic career. Personal learning plans may also be viewed negatively if they are poorly designed, if they tend to be filed away and forgotten, if they are not acted upon by students, if they are not meaningfully integrated into the school’s academic program, or if educators ignore the interests, desires, and aspirations expressed by students. In other words, how personal learning plans are actually used or not used in schools, and whether they produce the desired educational results, will likely determine how they are perceived.

Asynchronous Learning


Asynchronous learning is a general term used to describe forms of education, instruction, and learning that do not occur in the same place or at the same time. The term is most commonly applied to various forms of digital and online learning in which students learn from instruction—such as prerecorded video lessons or game-based learning tasks that students complete on their own—that is not being delivered in person or in real time. Yet asynchronous learning may also encompass a wide variety of instructional interactions, including email exchanges between teachers and students, online discussion boards, and course-management systems that organize instructional materials and correspondence, among many other possible variations.

Digital and online learning experiences can also be synchronous. For example, educational video conferences, interactive webinars, chat-based online discussions, and lectures that are broadcast at the same time they are given would all be considered forms of synchronous learning.

It should be noted that the term asynchronous learning is typically applied to teacher-student or peer-to-peer learning interactions that are happening in different locations or at different times, rather than to online learning experiences that do not involve an instructor, colleague, or peer. For example, the popular language-learning software Rosetta Stone is often purchased and used by individuals who want to acquire new language skills, but it is also increasingly used by world-language teachers in schools. When teachers use the software as an instructional tool to enhance language acquisition or diagnose learning weaknesses, this process would typically be considered a form of asynchronous learning. If someone uses the software on their own—i.e., without additional instruction or support from a teacher, and not as an extension of a formal course—it would likely not be considered asynchronous learning.

When teachers instruct students who are in the same classroom or learning environment, the term “in-person learning” may be applied.

For a related discussion, see blended learning.


Test Bias


Educational tests are considered biased if a test design, or the way results are interpreted and used, systematically disadvantages certain groups of students over others, such as students of color, students from lower-income backgrounds, students who are not proficient in the English language, or students who are not fluent in certain cultural customs and traditions. Identifying test bias requires that test developers and educators determine why one group of students tends to do better or worse than another group on a particular test. For example, is it because of the characteristics of the group members, the environment in which they are tested, or the characteristics of the test design and questions? As student populations in public schools become more diverse, and tests assume more central roles in determining individual success or access to opportunities, the question of bias—and how to eliminate it—has grown in importance.

There are a few general categories of test bias:

  • Construct-validity bias refers to whether a test accurately measures what it was designed to measure. On an intelligence test, for example, students who are learning English will likely encounter words they haven’t learned, and consequently test results may reflect their relatively weak English-language skills rather than their academic or intellectual abilities.
  • Content-validity bias occurs when the content of a test is comparatively more difficult for one group of students than for others. It can occur when members of a student subgroup, such as various minority groups, have not been given the same opportunity to learn the material being tested, when scoring is unfair to a group (for example, the answers that would make sense in one group’s culture are deemed incorrect), or when questions are worded in ways that are unfamiliar to certain students because of linguistic or cultural differences. Item-selection bias, a subcategory of this bias, refers to the use of individual test items that are more suited to one group’s language and cultural experiences.
  • Predictive-validity bias (or bias in criterion-related validity) refers to a test’s accuracy in predicting how well a certain student group will perform in the future. For example, a test would be considered “unbiased” if it predicted future academic and test performance equally well for all groups of students.
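To make the predictive-validity idea concrete, the following is a minimal sketch, with entirely invented scores and outcomes, of how one might compare how well a test predicts a later result for two groups. It fits a simple least-squares line for each group and compares the average prediction error; the data, group names, and threshold for concern are all hypothetical:

```python
# Sketch of checking predictive validity across groups.
# All data below is invented for illustration; real analyses use far
# larger samples and formal statistical tests.

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for y ~ x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def prediction_error(xs, ys):
    """Mean absolute error when the group's fitted line predicts its outcomes."""
    slope, intercept = fit_line(xs, ys)
    return sum(abs(y - (slope * x + intercept)) for x, y in zip(xs, ys)) / len(xs)

# Hypothetical admissions-test scores (x) and first-year GPAs (y):
group_a = ([400, 500, 600, 700], [2.0, 2.5, 3.0, 3.5])
group_b = ([400, 500, 600, 700], [2.4, 2.6, 3.4, 3.6])

# Comparable errors suggest the test predicts about equally well for both
# groups; a much larger error for one group would hint at predictive-validity bias.
print(prediction_error(*group_a), prediction_error(*group_b))
```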

Test bias is closely related to the issue of test fairness—i.e., do the social applications of test results have consequences that unfairly advantage or disadvantage certain groups of students? College-admissions exams often raise concerns about both test bias and test fairness, given their significant role in determining access to institutions of higher education, especially elite colleges and universities. For example, female students tend to score lower than males (possibly because of gender bias in test design), even though female students tend to earn higher grades in college on average (which possibly suggests evidence of predictive-validity bias).

To cite another example, there is evidence of a consistent connection between family income and scores on college-admissions exams, with higher-income students, on average, outscoring lower-income students. The fact that students can boost their scores considerably with tutoring or test coaching adds to the perception of socioeconomic unfairness, given that test preparation classes and services may be prohibitively expensive for many students. (Concerns about bias and unfairness are one contributing factor in a trend toward “test-optional” or “test-flexible” collegiate admissions policies.)

The following are several representative examples of other factors that can give rise to test bias:

  • If the staff developing a test is not demographically or culturally representative of the students who will take the test, test items may reflect inadvertent bias. For example, if test developers are predominantly white, upper-middle-class males, the resulting test could, due to cultural oversights, advantage demographically similar test takers and disadvantage others.
  • Norm-referenced tests (or tests designed to compare and rank test takers in relation to one another) may be biased if the “norming process” does not include representative samples of all the tested subgroups. For example, if test developers do not include linguistically, culturally, and socioeconomically diverse students in the initial comparison groups (which are used to determine the norms used in the test), the resulting test could potentially disadvantage excluded groups.
  • Certain test formats may have an inherent bias toward some groups of students, at the expense of others. For example, evidence suggests that timed, multiple-choice tests may favor styles of thinking said to be more characteristic of males than females, such as a willingness to risk guessing the right answer, and that such questions tend to reward black-and-white logic over more nuanced reasoning.
  • The choice of language in test questions can introduce bias, for example, if idiomatic cultural expressions—such as “an old flame” or “an apples-and-oranges comparison”—are used that may be unfamiliar to recently arrived immigrant students who may not yet be proficient in the English language or in American cultural references.
  • Tests may be considered biased if they include references to cultural details that are not familiar to particular student groups. For example, a student who recently immigrated from the Caribbean may never have experienced winter, snow, or a snow-related school cancellation, and may therefore be thrown off by an essay question asking him or her to describe a snow-day experience.
  • Another aspect of culturally biased testing is implicated in the overrepresentation of black students, especially black males, in special-education programs. For example, the concern is that the tests used to identify students with disabilities, including intelligence tests, are misidentifying black students as learning disabled because of inherent racial and cultural biases.


As with measurement error, some degree of bias and unfairness in testing may be unavoidable. The inevitability of test bias and unfairness is among the reasons that many test developers and testing experts caution against making important educational decisions based on a single test result. The Standards for Educational and Psychological Testing—a set of proposed guidelines jointly developed by the American Educational Research Association, American Psychological Association, and the National Council on Measurement in Education—include a recommendation that “in elementary or secondary education, a decision or characterization that will have a major impact on a test taker should not automatically be made on the basis of a single score.”

Given the fact that test results continue to be widely used when making important decisions about students, test developers and experts have identified a number of strategies that can reduce, if not eliminate, test bias and unfairness. A few representative examples include:

  • Striving for diversity in test-development staffing, and training test developers and scorers to be aware of the potential for cultural, linguistic, and socioeconomic bias.
  • Having test materials reviewed by experts trained in identifying cultural bias and by representatives of culturally and linguistically diverse subgroups.
  • Ensuring that norming processes and sample sizes used to develop norm-referenced tests are inclusive of diverse student subgroups and large enough to constitute a representative sample.
  • Eliminating items that produce the largest racial and cultural performance gaps, and selecting items that produce the smallest gaps—a technique known as “the golden rule.” (This particular strategy may be logistically difficult to achieve, however, given the number of racial, ethnic, and cultural groups that may be represented in any given testing population).
  • Screening for and eliminating items, references, and terms that are more likely to be offensive to certain groups.
  • Translating tests into a test taker’s native language or using interpreters to translate test items.
  • Including more “performance-based” items to limit the role that language and word-choice plays in test performance.
  • Using multiple assessment measures to determine academic achievement and progress, and avoiding the use of test scores, in exclusion of other information, to make important decisions about students.

Career and Technical Education


Career and technical education is a term applied to schools, institutions, and educational programs that specialize in the skilled trades, applied sciences, modern technologies, and career preparation. It was formerly (and is still commonly) called vocational education; however, the term has fallen out of favor with most educators.

Career and technical programs frequently offer both academic and career-oriented courses, and many provide students with the opportunity to gain work experience through internships, job shadowing, on-the-job training, and industry-certification opportunities. Career and technical programs—depending on their size, configuration, location, and mission—provide a wide range of learning experiences spanning many different career tracks, fields, and industries, from skilled trades such as automotive technology, construction, plumbing, or electrical contracting to fields as diverse as agriculture, architecture, culinary arts, fashion design, filmmaking, forestry, engineering, healthcare, personal training, robotics, or veterinary medicine.

Career and technical education may be offered in middle schools and high schools or through community colleges and other postsecondary institutions and certification programs. At the secondary level, career and technical education is often provided by regional centers that serve students from multiple schools or districts. For example, the Boards of Cooperative Educational Services in New York administers a network of 37 regional career and technical education centers that serve students throughout the state. Many states have similar regional centers or statewide networks that operate as part of the public-school system.

In some cases, career and technical education is provided through a high school, where it may or may not be an integrated part of the school’s regular academic program. Students may also attend separate career and technical institutions for part of the school day, or a regional center may be the primary school of enrollment, where students take both academic and career and technical courses. In other cases, career and technical programs may take the form of a distinct “school within a school,” such as a theme-based academy, that offers an interdisciplinary or career-oriented program in which academic coursework is aligned with specific career paths, such as culinary arts, nursing, or engineering.


Some educators and school-reform advocates argue that career and technical education is an underutilized learning pathway that could help to increase the educational engagement, achievement, and attainment of students who are not excelling in more traditional academic programs. The practical learning experiences that are often provided in career and technical programs appeal to many students, and certain common elements—the focus on critical thinking, new technologies, real-world settings, hands-on activities, and the application of learning to practical problems, for example—align with a growing emphasis on 21st century skills—skills that are relevant to all academic subject areas and that can be applied in educational, career, and civic contexts throughout a student’s life. Advocates may also argue that career and technical education programs are an antidote to some of the weaknesses of traditional academic programs. For example, rather than learning from books, taking tests, and discussing abstract concepts in classrooms, students gain practical, relevant, marketable skills that will make them more employable adults after graduation.

Over the past few decades, learning expectations for career and technical education have risen significantly, largely in response to the increasing sophistication of modern careers that are demanding higher levels of education, training, and skill from the workforce. For instance, yesterday’s “auto mechanics” are today’s “automotive technicians,” and automotive programs now routinely provide training in the use of advanced computerized diagnostic equipment in addition to more traditional mechanical repairs. Students enrolled at career and technical centers, which are typically secondary-level public schools, are required to meet the same learning standards that apply to students in public high schools. In addition to state-required learning standards that apply to public schools, many states have developed standards specific to career and technical programs.


In the United States, career and technical education is often stigmatized, and there is a widespread perception that career and technical centers provide a lower quality education or that students who attend such schools are less capable or have lower aspirations. At least in part, these perceptions are lingering stereotypes associated with traditional “vocational” programs of past decades. There is no concrete evidence that such generalized perceptions and stereotypes are valid, and many studies have shown that students enrolled in career and technical programs can and do outperform students in more traditional academic settings.

Discussions about career and technical education also intersect with ongoing debates about academic “tracking,” or the sorting of students into tiered courses based on past academic performance or perceived ability. Depending on its structure, academic requirements, and student demographics, a career and technical program can resemble an academic track in that certain types of students or certain educational outcomes may predominate. For example, lower-income students and minorities may be disproportionately represented in a program, or graduation rates and college-going rates may be markedly lower. Critics of tracking may argue that such results more than likely reflect the particular structure and culture of the education system, rather than an accurate representation of the abilities and aspirations of the students enrolled in the programs.


Rubric
A rubric is typically an evaluation tool or set of guidelines used to promote the consistent application of learning expectations, learning objectives, or learning standards in the classroom, or to measure their attainment against a consistent set of criteria. In instructional settings, rubrics clearly define academic expectations for students and help to ensure consistency in the evaluation of academic work from student to student, assignment to assignment, or course to course. Rubrics are also used as scoring instruments to determine grades or the degree to which learning standards have been demonstrated or attained by students.

In courses, rubrics may be provided and explained to students before they begin an assignment to ensure that learning expectations have been clearly communicated to and understood by students, and, by extension, parents or other adults involved in supporting a student’s education. Rubrics may take many forms, but they typically include the following information:

  • The educational purpose of an assignment, the rationale behind it, or how it connects to larger concepts or themes in a course.
  • The specific criteria or learning objectives that students must show proficiency in to successfully complete an assignment or meet expected standards. An oral-presentation rubric, for example, will establish the criteria—e.g., speak clearly, make eye contact, or include a description of the main characters, setting, and plot—on which students will be graded.
  • The specific quality standards the teacher will use when evaluating, scoring, or grading an assignment. For example, if the teacher is grading an assignment on a scale of 1 to 4, the rubric may detail what students need to do or demonstrate to earn a 1, 2, 3, or 4. Other rubrics will use descriptive language—does not meet, partially meets, meets, or exceeds the standard, for example—instead of a numerical score.

Rubrics are generally designed to be simple, explicit, and easily understood. Rubrics may help students see connections between learning (what will be taught) and assessment (what will be evaluated) by making the feedback they receive from teachers clearer, more detailed, and more useful in terms of identifying and communicating what students have learned or what they may still need to learn. Educators may use rubrics midway through an assignment to help students assess what they still need to do or demonstrate before submitting a final product. Rubrics may also encourage students to reflect on their own learning progress and help teachers to tailor instruction, academic support, or future assignments to address distinct learning needs or learning gaps. In some cases, students are involved in the co-creation of rubrics for a class project or for the purposes of evaluating their own work or that of their peers.

Since rubrics are used to establish a consistent set of learning expectations that all students need to demonstrate, they may also be used by school leaders and teachers as a way to maintain consistency and objectivity when teaching or assessing learning across grade levels, courses, or assignments. While some schools give individual teachers the discretion to create and use their own rubrics, other schools utilize “common rubrics” or “common assessments” to promote greater consistency in the application and evaluation of learning throughout a school. In most cases, common rubrics are collaboratively developed by a school faculty, academic department, or team. Some schools use common rubrics within individual academic subjects, while other schools apply them across all academic disciplines. Common rubrics and assessments can also help schools, departments, and teaching teams refine their lessons and instructional practices to target specific learning areas in which their students tend to struggle. Rubrics are often locally designed by a district or school, but they may be provided by outside organizations as part of a specific program or improvement model.

For related discussions, see coherent curriculum and high expectations.

Authentic Learning


In education, the term authentic learning refers to a wide variety of educational and instructional techniques focused on connecting what students are taught in school to real-world issues, problems, and applications. The basic idea is that students are more likely to be interested in what they are learning, more motivated to learn new concepts and skills, and better prepared to succeed in college, careers, and adulthood if what they are learning mirrors real-life contexts, equips them with practical and useful skills, and addresses topics that are relevant and applicable to their lives outside of school. For related discussions, see 21st century skills, relevance, and rigor.

An “authentic” way to teach the scientific method, for example, would be to ask students to develop a hypothesis about how ecosystems work that is based on first-hand observations of a local natural habitat, then have them design and conduct an experiment to prove or disprove the hypothesis. After the experiment is completed, students might then write up, present, and defend their findings to a panel of actual scientists. In contrast, a “less authentic” way to teach the scientific method would be to have students read about the concept in a textbook, memorize the prescribed process, and then take a multiple-choice test to determine how well they remember it.

In the “authentic” learning example above, students “learn by doing,” and they acquire the foundational skills, knowledge, and understanding that working scientists actually need and use in their profession. In this case, students would also learn related skills such as critical thinking, problem solving, formal scientific observation, note taking, research methods, writing, presentation techniques, and public speaking, for example. In the “less authentic” learning situation, students acquire knowledge largely for purposes of getting a good grade on a test. As a result, students may be less likely to remember what they learned because the concept remains abstract, theoretical, or disconnected from first-hand experience. And since students were never required to use what they learned in a real-life situation, teachers won’t be able to determine if students can translate what they have learned into the practical skills, applications, and habits of mind that would be useful in life outside of school—such as in a future job, for example.

Another principle of authentic learning is that it mirrors the complexities and ambiguities of real life. On a multiple-choice science test there are “right” answers and “wrong” answers determined by teachers and test developers. But when it comes to actual scientific theories and findings, for example, there are often many potentially correct answers that may be extremely difficult, or even impossible, to unequivocally prove or disprove. For this reason, authentic learning tends to be designed around open-ended questions without clear right or wrong answers, or around complex problems with many possible solutions that could be investigated using a wide variety of methods. Authentic learning is also more likely to be “interdisciplinary,” given that life, understanding, and knowledge are rarely compartmentalized into subject areas, and as adults students will have to apply multiple skills or domains of knowledge in any given educational, career, civic, or life situation. Generally speaking, authentic learning is intended to encourage students to think more deeply, raise hard questions, consider multiple forms of evidence, recognize nuances, weigh competing ideas, investigate contradictions, or navigate difficult problems and situations.

In perhaps its purest expression, authentic learning culminates in students making some form of genuinely useful contribution to their community or to a field of study. The winners of the annual Google Science Fair, for example, would exemplify this ideal. In 2012, the Grand Prize winner, 17-year-old Brittany Wenger, created a software application—an “artificial neural network”—that successfully diagnosed breast cancer in 99% of tested cases and that may be put into use in hospitals in the future.

While few students will develop better ways to diagnose cancer, schools create authentic learning experiences in a variety of ways. For example, a science class might study water conservation, conduct an analysis of their school’s water usage, investigate potential ways the school might reduce its usage, and then present a water-conservation proposal to the school board that includes a variety of recommendations—e.g., posting signs in bathrooms encouraging students not to leave water running, installing low-flow faucets with automatic on-off sensors, using rain barrels below drain spouts, planting drought-resistant plants in the schoolyard that are watered using the collected rainwater, etc. Once these solutions are put into practice, students might conduct observations to calculate how much water the school conserves on a daily, weekly, or annual basis, and then develop a website, infographics, or videos to share the information with school leaders and the broader community.

Authentic learning is closely related to the concept and theory of “constructivist teaching,” and in some contexts it may be used synonymously. For a more detailed discussion, see the Wikipedia entry for constructivist teaching methods.


As a school-reform concept, authentic learning is related philosophically and pedagogically to strategies such as personalized learning, community-based learning, and project-based learning, among others. In addition, instructional strategies such as demonstrations of learning, capstone projects, personal learning plans, and portfolios may be associated with authentic learning.

Authentic learning is also a central concept in educational reforms that call for schools to place a greater emphasis on skills that are used in all subject areas and that students can apply in all educational, career, and civic settings throughout their lives. It’s also a central concept in reforms that question how teachers have traditionally taught and what students should be learning—such as the 21st century skills movement, which broadly calls on schools to create academic programs and learning experiences that equip students with the most essential knowledge, skills, and dispositions they will need to be successful in the collegiate programs and modern workplaces of the 21st century. As higher education and job requirements become more competitive, complex, and technical, proponents argue, students will need the kinds of skills that authentic-learning experiences can provide to successfully navigate the modern world, excel in challenging careers, and process increasingly complex information.


Calls for “more authentic learning” in education are, generally speaking, a response to the perception that many public schools pay insufficient attention to developing the intellectual abilities, practical skills, work habits, and character traits required for success in adult life. In other words, the concept of “authentic learning” intersects with larger social debates about what public schools should be teaching and what the purpose of public education should be. For example: Is the purpose of public education to get students to pass a test or to earn a high school diploma? Or is the purpose to prepare students for success in life after graduation, including postsecondary education and modern jobs or career paths? Advocates of authentic learning may contend that the purpose of public education is to look beyond test scores or graduation rates—success in school—to the knowledge, skills, and character traits students actually need to succeed in adult life—success outside of school. For related discussions, see career-ready and college-ready.

In addition, authentic learning may also intersect with a variety of ongoing debates about how and what schools should teach. Critics may question whether authentic-learning experiences can cover enough academic content in the core subject areas to ensure that students acquire a broad, well-rounded knowledge base. Critics may also argue that authentic learning, and related instructional strategies, may displace more traditional yet effective forms of teaching, fail to equip students with “the basics,” or lead to disorderly classrooms, among other possible arguments. Advocates would contend, however, that these criticisms are unfounded, and that a well-planned curriculum built around authentic-learning experiences can cover all the academic subjects and concepts that students need (unless the learning experiences are poorly designed and executed, of course). In some cases, criticism arises in response to a negative experience with authentic learning or from an insufficient understanding of the concept.

Authentic learning may also place more burdens—both logistical and instructional—on teachers. For example, authentic learning may require significantly more planning and preparation, and teachers may need to acquire new and more sophisticated instructional techniques or substantially revise lesson plans they have used for years. Authentic learning may also introduce more logistical complexities, particularly when learning experiences take place outside of the school or classroom (in schools, even seemingly minor logistical tasks, such as making travel arrangements or securing parental permissions, can take up a lot of time). For a related discussion, see learning pathway.



The term curriculum refers to the lessons and academic content taught in a school or in a specific course or program. In dictionaries, curriculum is often defined as the courses offered by a school, but it is rarely used in such a general sense in schools. Depending on how broadly educators define or employ the term, curriculum typically refers to the knowledge and skills students are expected to learn, which includes the learning standards or learning objectives they are expected to meet; the units and lessons that teachers teach; the assignments and projects given to students; the books, materials, videos, presentations, and readings used in a course; and the tests, assessments, and other methods used to evaluate student learning. An individual teacher’s curriculum, for example, would be the specific learning standards, lessons, assignments, and materials used to organize and teach a particular course.

When the terms curriculum or curricula are used in educational contexts without qualification, specific examples, or additional explanation, it may be difficult to determine precisely what the terms are referring to—mainly because they could be applied to either all or only some of the component parts of a school’s academic program or courses.

In many cases, teachers develop their own curricula, often refining and improving them over years, although it is also common for teachers to adapt lessons and syllabi created by other teachers, use curriculum templates and guides to structure their lessons and courses, or purchase prepackaged curricula from individuals and companies. In some cases, schools purchase comprehensive, multigrade curriculum packages—often in a particular subject area, such as mathematics—that teachers are required to use or follow. Curriculum may also encompass a school’s academic requirements for graduation, such as the courses students have to take and pass, the number of credits students must complete, and other requirements, such as completing a capstone project or a certain number of community-service hours. Generally speaking, curriculum takes many different forms in schools—too many to comprehensively catalog here.

It is important to note that while curriculum encompasses a wide variety of potential educational and instructional practices, educators often have a very precise, technical meaning in mind when they use the term. Most teachers spend a lot of time thinking about, studying, discussing, and analyzing curriculum, and many educators have acquired a specialist’s expertise in curriculum development—i.e., they know how to structure, organize, and deliver lessons in ways that facilitate or accelerate student learning. To noneducators, some curriculum materials may seem simple or straightforward (such as a list of required reading, for example), but they may reflect a deep and sophisticated understanding of an academic discipline and of the most effective strategies for learning acquisition and classroom management.

For a related discussion, see hidden curriculum.


Since curriculum is one of the foundational elements of effective schooling and teaching, it is often the object of reforms, most of which are broadly intended to either mandate or encourage greater curricular standardization and consistency across states, schools, grade levels, subject areas, and courses. The following are a few representative examples of the ways in which curriculum is targeted for improvement or used to leverage school improvement and increase teacher effectiveness:

  • Standards requirements: When new learning standards are adopted at the state, district, or school levels, teachers typically modify what they teach and bring their curriculum into “alignment” with the learning expectations outlined in the new standards. While the technical alignment of curriculum with standards does not necessarily mean that teachers are teaching in accordance with the standards—or, more to the point, that students are actually achieving those learning expectations—learning standards remain a mechanism by which policy makers and school leaders attempt to improve curriculum and teaching quality. The Common Core State Standards Initiative, for example, is a national effort to influence curriculum design and teaching quality in schools through the adoption of new learning standards by states.
  • Assessment requirements: Another reform strategy that indirectly influences curriculum is assessment, since the methods used to measure student learning compel teachers to teach the content and skills that will eventually be evaluated. The most commonly discussed examples are standardized testing and high-stakes testing, which can give rise to a phenomenon informally called “teaching to the test.” Because federal and state policies require students to take standardized tests at certain grade levels, and because regulatory penalties or negative publicity may result from poor student performance (in the case of high-stakes tests), teachers are consequently under pressure to teach in ways that are likely to improve student performance on standardized tests—e.g., by teaching the content likely to be tested or by coaching students on specific test-taking techniques. While standardized tests are one way in which assessment is used to leverage curriculum reform, schools may also use rubrics and many other strategies to improve teaching quality through the modification of assessment strategies, requirements, and expectations.
  • Curriculum alignment: Schools may try to improve curriculum quality by bringing teaching activities and course expectations into “alignment” with learning standards and other school courses—a practice sometimes called “curriculum mapping.” The basic idea is to create a more consistent and coherent academic program by making sure that teachers teach the most important content and eliminate learning gaps that may exist between sequential courses and grade levels. For example, teachers may review their mathematics program to ensure that what students are actually being taught in every Algebra I course offered in the school not only reflects expected learning standards for that subject area and grade level, but that it also prepares students for Algebra II and geometry. When the curriculum is not aligned, students might be taught significantly different content in each Algebra I course, for example, and students taking different Algebra I courses may complete the courses unevenly prepared for Algebra II. For a more detailed discussion, see coherent curriculum.
  • Curriculum philosophy: The design and goals of any curriculum reflect the educational philosophy—whether intentionally or unintentionally—of the educators who developed it. Consequently, curriculum reform may occur through the adoption of a different philosophy or model of teaching by a school or educator. Schools that follow the Expeditionary Learning model, for example, embrace a variety of approaches to teaching generally known as project-based learning, which encompasses related strategies such as community-based learning and authentic learning. In Expeditionary Learning schools, students complete multifaceted projects called “expeditions” that require teachers to develop and structure curriculum in ways that are quite different from the more traditional approaches commonly used in schools.
  • Curriculum packages: In some cases, schools decide to purchase or adopt a curriculum package that has been developed by an outside organization. One well-known and commonly used option for American public schools is International Baccalaureate, which offers curriculum programs for elementary schools, middle schools, and high schools. Districts may purchase all three programs or an individual school may purchase only one, and the programs may be offered to all or only some of the students in a school. When schools adopt a curriculum package, teachers often receive specialized training to ensure that the curriculum is effectively implemented and taught. In many cases, curriculum packages are purchased or adopted because they are perceived to be of a higher quality or more prestigious than the existing curriculum options offered by a school or independently developed by teachers.
  • Curriculum resources: The resources that schools provide to teachers can also have a significant effect on curriculum. For example, if a district or school purchases a certain set of textbooks and requires teachers to use them, those textbooks will inevitably influence what gets taught and how teachers teach. Technology purchases are another example of resources that have the potential to influence curriculum. If all students are given laptops and all classrooms are outfitted with interactive whiteboards, for example, teachers can make significant changes in what they teach and how they teach to take advantage of these new technologies (for a more detailed discussion of this example, see one-to-one). In most cases, however, new curriculum resources require schools to invest in professional development that helps teachers use the new resources effectively, given that simply providing new resources without investing in teacher education and training may fail to bring about desired improvements. In addition, the type of professional development provided to teachers can also have a major influence on curriculum development and design.
  • Curriculum standardization: States, districts, and schools may also try to improve teaching quality and effectiveness by requiring, or simply encouraging, teachers to use either a standardized curriculum or common processes for developing curriculum. While the strategies used to promote more standardized curricula can vary widely from state to state or school to school, the general goal is to increase teaching quality through greater curricular consistency. School performance will likely improve, the reasoning goes, if teaching methods and learning expectations are based on sound principles and consistently applied throughout a state, district, or school. Curriculum standards may also be created or proposed by influential educational organizations—such as the National Science Teachers Association or the National Council of Teachers of Mathematics, for example—with the purpose of guiding learning expectations and teaching within particular academic disciplines.
  • Curriculum scripting: Often called “scripted curriculum,” the scripting of curriculum is the most prescriptive form of standardized, prepackaged curriculum, since it typically requires teachers to not only follow a particular sequence of prepared lessons, but to actually read aloud from a teaching script in class. While the professional autonomy and creativity of individual teachers may be significantly limited when such a curriculum system is used, the general rationale is that teaching quality can be assured or improved, or at least maintained, across a school or educational system if teachers follow a precise instructional script. While not every teacher will be a naturally excellent teacher, the reasoning goes, all teachers can at least be given a high-quality curriculum script to follow. Scripted curricula tend to be most common in districts and schools that face significant challenges attracting and retaining experienced or qualified teachers, such as larger urban schools in high-poverty communities.

Professional Development


In education, the term professional development may be used in reference to a wide variety of specialized training, formal education, or advanced professional learning intended to help administrators, teachers, and other educators improve their professional knowledge, competence, skill, and effectiveness. When the term is used in education contexts without qualification, specific examples, or additional explanation, however, it may be difficult to determine precisely what “professional development” is referring to.

In practice, professional development for educators encompasses an extremely broad range of topics and formats. For example, professional-development experiences may be funded by district, school, or state budgets and programs, or they may be supported by a foundation grant or other private funding source. They may range from a one-day conference to a two-week workshop to a multiyear advanced-degree program. They may be delivered in person or online, during the school day or outside of normal school hours, and through one-on-one interactions or in group situations. And they may be led and facilitated by educators within a school or provided by outside consultants or organizations hired by a school or district. And, of course, the list of possible formats could go on.

The following are a representative selection of common professional-development topics and objectives for educators:

  • Furthering education and knowledge in a teacher’s subject area—e.g., learning new scientific theories, expanding knowledge of different historical periods, or learning how to teach subject-area content and concepts more effectively.
  • Training or mentoring in specialized teaching techniques that can be used in many different subject areas, such as differentiation (varying teaching techniques based on student learning needs and interests) or literacy strategies (techniques for improving reading and writing skills), for example.
  • Earning certification in a particular educational approach or program, usually from a university or other credentialing organization, such as teaching Advanced Placement courses or career and technical programs that culminate in students earning an industry-specific certification.
  • Developing technical, quantitative, and analytical skills that can be used to analyze student-performance data, and then use the findings to make modifications to academic programs and teaching techniques.
  • Learning new technological skills, such as how to use interactive whiteboards or course-management systems in ways that can improve teaching effectiveness and student performance.
  • Improving fundamental teaching techniques, such as how to manage a classroom effectively or frame questions in ways that elicit deeper thinking and more substantive answers from students.
  • Working with colleagues, such as in professional learning communities, to develop teaching skills collaboratively or create new interdisciplinary courses that are taught by teams of two or more teachers.
  • Developing specialized skills to better teach and support certain populations of students, such as students with learning disabilities or students who are not proficient in English.
  • Acquiring leadership skills, such as skills that can be used to develop and coordinate a school-improvement initiative or a community-volunteer program. For related discussions, see leadership team and shared leadership.
  • Pairing new and beginning teachers with more experienced “mentor teachers” or “instructional coaches” who model effective teaching strategies, expose less-experienced teachers to new ideas and skills, and provide constructive feedback and professional guidance.
  • Conducting action research to gain a better understanding of what’s working or not working in a school’s academic program, and then using the findings to improve educational quality and results.
  • Earning additional formal certifications, such as the National Board for Professional Teaching Standards certification, which requires educators to spend a considerable amount of time recording, analyzing, and reflecting on their teaching practice (many states provide incentives for teachers to obtain National Board Certification).
  • Attending graduate school to earn an advanced degree, such as a master’s degree or doctorate in education, educational leadership, or a specialized field of education such as literacy or technology.


In recent years, state and national policies have focused more attention on the issue of “teacher quality”—i.e., the ability of individual teachers or a teaching faculty to improve student learning and meet expected standards for performance. The No Child Left Behind Act, for example, provides a formal definition of what constitutes high-quality professional development and requires schools to report the percentage of their teaching faculty that meet the law’s definition of a “highly qualified teacher.” The law maintains that professional development should take the form of a “comprehensive, sustained, and intensive approach to improving teachers’ and principals’ effectiveness in raising student achievement.” Similar policies that describe professional-development expectations or require teachers to meet certain expectations for professional development may be in place at the state, district, and school levels across the country, although the design and purpose of these policies may vary widely from place to place.

Generally speaking, professional development is considered to be the primary mechanism that schools can use to help teachers continuously learn and improve their skills over time. And in recent decades, the topic has been extensively researched and many strategies and initiatives have been developed to improve the quality and effectiveness of professional development for educators. While theories about professional development abound, a degree of consensus has emerged on some of the major features of effective professional development. For example, one-day workshops or conferences that are not directly connected to a school’s academic program, or to what teachers are teaching, are generally considered to be less effective than training and learning opportunities that are sustained over longer periods of time and directly connected to what schools and teachers are actually doing on a daily basis. Terms and phrases such as sustained, intensive, ongoing, comprehensive, aligned, collaborative, continuous, systemic, or capacity-building, as well as relevant to teacher work and connected to student learning, are often used in reference to professional development that is considered to be of higher quality. That said, there are a wide variety of theories about what kinds of professional development are most effective, as well as divergent research findings.


While few educators would argue against the need for and importance of professional development, specific programs and learning opportunities may be criticized or debated for any number of reasons, especially if the professional development is poorly designed, executed, scheduled, or facilitated, or if teachers feel that it is irrelevant to their teaching needs and day-to-day professional responsibilities, among many other possible causes.

In addition, school leaders may encounter a variety of challenges when selecting and providing professional-development opportunities. For example, one common obstacle is finding adequate time during the school day for teachers to participate in professional development. Securing sufficient funding is another common complication, particularly during times when school budgets are tight or being cut. The amount of funding allocated for professional development by states, districts, and schools may also vary widely—some schools could have access to more professional-development funding than they can reasonably use in a given year, while other schools and teachers may be expected to fund most or all of their professional development on their own. Other common challenges include insufficient support for professional development from the administrative leadership, a lack of faculty interest or motivation, or overburdened teacher workloads.

Synchronous Learning


Synchronous learning is a general term used to describe forms of education, instruction, and learning that occur at the same time, but not in the same place. The term is most commonly applied to various forms of televisual, digital, and online learning in which students learn from instructors, colleagues, or peers in real time, but not in person. For example, educational video conferences, interactive webinars, chat-based online discussions, and lectures that are broadcast at the same time they are delivered would all be considered forms of synchronous learning.

Digital and online learning experiences can also be asynchronous—i.e., instruction and learning occur not only in different locations, but also at different times. For example, prerecorded video lessons, email exchanges between teachers and students, online discussion boards, and course-management systems that organize instructional materials and related correspondence would all be considered forms of asynchronous learning.

Before the development and widespread adoption of interactive, internet-based technologies, synchronous learning was more commonly called distance education or distance learning—and these terms are still used today. While distance learning took many different forms, instructional interactions were often conducted over radio and, later on, closed-circuit television systems. The televisual systems were comparatively expensive, since schools needed classrooms outfitted with a variety of audiovisual technologies—video cameras, microphones, televisions, etc.—and instructional interactions could only occur between properly equipped classrooms that used compatible systems. While distance learning was used in a wide variety of educational settings, it was often employed by smaller schools, rural schools, and other education programs that did not have the funding or resources needed to hire teachers in specialized areas or provide a broad selection of specialized courses—e.g., courses in Chinese language or Japanese history. In these cases, schools may have used, and may still use, distance- and asynchronous-learning technology to expand course offerings for students or share teachers with specialized expertise.

When teachers instruct students who are in the same classroom or learning environment, the term “in-person learning” may be applied.

For a related discussion, see blended learning.

Score Inflation


Score inflation results when student scores on tests or other assessments increase but the increase does not reflect any genuine improvements in learning—i.e., the instrument being used to measure learning acquisition and growth is providing a false reading because (1) the testing design or processes are flawed or (2) educators are inadvertently or intentionally inflating student scores. Score inflation has been compared to holding a lit match to a thermometer in a cold room: while the thermometer reading indicates that the temperature is rising, the room remains cold.

There are two main problems associated with score inflation: (1) students appear to be improving academically when they’re not, and they may consequently not receive the additional instruction, attention, and academic support they need to improve and succeed, and (2) elected officials, policy makers, parents, and the public are given the misleading impression that schools are improving or performing adequately when in fact performance may be stagnating or even deteriorating.

There are a number of educational practices that can contribute to score inflation, and although some may be sanctioned, or even encouraged, by principals and other school leaders, many are generally recognized as cheating. While score inflation can be caused by a variety of factors, it is primarily associated with high-stakes testing. The following are a few representative examples of the ways in which scores may be misleadingly high or artificially inflated:

  • If teachers are under pressure to improve student scores on high-stakes tests—because there is a risk that low scores will lead to sanctions, bad publicity, or withheld bonuses, for example—they may “teach to the test” by drilling students in test-preparation strategies and focusing narrowly on topics and questions that are expected to be on a test. In some cases, educators have even been caught cheating. For example, school administrators and teachers may gain access to test questions and review them in advance, they may display correct answers on a blackboard during the administration of a test, they may systematically change incorrect answers to the correct ones, or they may expel historically low-performing students before they can take a standardized test to increase overall school-performance scores.
  • Test questions can be made easier. For example, a test administered in eleventh grade may reflect an eighth-grade learning level, or the performance level considered to be “passing” or “proficient” may be lowered to manufacture the perception that underperforming students are achieving at expected levels.
  • Students can be given extra time to complete tests, or they may receive some other form of “help” from adults during the testing period.
  • Either on their own initiative or at the direction of administrators, teachers may provide intensive instruction and academic support to a smaller group of students who are deemed most likely to improve their scores enough to meet expected benchmarks for improvement. If the intensively supported students improve from just below to just above the cutoff score for “proficiency” on a test, for example, it can help a school meet improvement expectations—at least technically—and avoid negative consequences, even though the learning needs of other students in the class may be neglected.


The growing use of high-stakes testing has increased both discussion of and debate about score inflation in public education. Proponents of high-stakes testing tend to argue that the strategy motivates teachers and students to improve academic achievement, and the associated consequences help hold educators and schools accountable for improving educational results. Critics, however, may claim that the tests have created “perverse” incentives that will—perhaps inevitably—lead to problems such as score inflation, test manipulation, and cheating, rather than to genuine learning improvement.

While score inflation is largely seen as a negative phenomenon, some educators see little harm in assigning consequences to test results, and they may therefore be dismissive of score inflation, reasoning that, if the tests are well designed and they measure what students are expected to learn, “teaching to the test” is a good thing—i.e., the practice is precisely what is needed, since it will help to ensure that students receive a high-quality education. In this case, test scores may not necessarily be considered “inflated” at all, since stronger test performance could be seen as “the goal” or as sufficient evidence that stronger learning improvement has been achieved.

Other educators may argue that high-stakes tests, and the resulting incentives to inflate scores, distort the fundamental purpose of education: rather than teaching students the most important knowledge and skills they will need in adult life, teachers are pressured to focus on test preparation and the narrower range of knowledge and skills measured by tests. In this view, test preparation and success—as opposed to broader educational objectives, such as college and career preparation and success—are the implicit “goal” of education, and misleading test results may be widely accepted as sufficient evidence of success.

Career-Ready

The term career-ready is generally applied to (1) students who are considered to be equipped with the knowledge and skills deemed to be essential for success in the modern workforce, or (2) the kinds of educational programs and learning opportunities that lead to improved workforce preparation. The career-ready concept is also related to 21st century skills and college-ready.


Calls for placing a greater emphasis on “career readiness” in public education are, generally speaking, a response to the perception that many public schools, particularly public high schools, pay insufficient attention to developing the intellectual abilities, practical skills, work habits, and character traits required for success in the workplace or in various professional career paths (many career and technical education programs, however, would be exceptions to this general view). In other words, “career-ready” has become a touchstone in a larger debate about what public schools should be teaching and what the purpose of public education should be. For example: Is the purpose of public education to get students to pass a test or to earn a high school diploma? Or is the purpose to prepare students for success in life after graduation, including postsecondary education and modern jobs or career paths? Advocates of career readiness, and the related concept of “college readiness,” would contend that the purpose of public education is to look beyond test scores or graduation rates—success in school—to the knowledge, skills, and aptitudes students actually need to succeed in adult life—success after school. A high school diploma, in this view, should certify readiness for post-graduation jobs and learning experiences, rather than merely the completion of secondary school.


Some educators are wary of the “career-ready” label, since they view it as a potential subversion of “college-ready”—the idea that students should be held to high academic expectations and graduate from public high school prepared to enroll in, succeed in, and graduate from two- or four-year collegiate programs. Others argue that college readiness should not be a universal goal in public education, since it may undervalue other post-graduation options, such as military careers or industry-certification programs, or that it may alienate, disadvantage, or stigmatize students who are not aspiring to a college education but planning to get a job after high school.

Still others argue that there is no real distinction between “career-ready” and “college-ready,” since students will need, or should be taught, the same skills and knowledge regardless of their future aspirations or post-graduation plans. In this case, the career-ready versus college-ready “debate” is not only seen as misleading or unnecessarily confusing, but also as an artificial distinction that could potentially give rise to the same educational inequities that concepts such as career-ready and college-ready were created to overturn—i.e., that college-preparatory programs will end up providing a high-quality education to some students, while career-preparation programs will provide a lower-quality or less-valuable education to others. In this case, the general argument is that all students should receive the best possible education regardless of what they may plan to do after graduating from high school, and that any attempt to create different educational tracks for “college-bound students” and “career-bound students” will, inevitably, lead to inequities and divergent educational quality. Since it is impossible to accurately predict any individual student’s future educational choices or career path (which may change dramatically from early adolescence to adulthood), the reasoning goes, schools should encourage the highest possible aspirations in students. In addition, some national surveys of college educators and employers have provided evidence that, when it comes to the knowledge and skills that both college instructors and prospective employers are looking for, career readiness and college readiness may be largely indistinguishable.
Some surveys, for example, have found that incoming college students and younger employees not only have similar learning and preparation deficits, but that both college educators and employers are looking for similar knowledge, skills, and aptitudes, including the broad array of skills often called “21st century skills.” Advocates of erasing the distinction between career-ready and college-ready may recommend or use the phrase “college and career ready” as an alternative.

Some of this debate may stem from the stigma historically associated with “vocational education” programs, which were widely seen as having a lower status than the college-preparatory programs offered in regular high schools, in part because they were believed to be less demanding academically or because they were associated with “blue-collar” jobs. Yet many modern career and technical education programs, and more recent innovations such as theme-based academies and dual-enrollment programs, aim to integrate challenging academic preparation with career-related learning experiences, thereby rendering the distinction between “career-ready” and “college-ready” essentially moot.