Credits are one of the primary methods used to determine and document that students have met academic requirements, generally at the high school level. Credits are awarded upon completing and passing a course or required school program. In the United States, credits are often based on the Carnegie unit, or 120 hours of instructional time (one hour of instruction a day, five days a week, for 24 weeks). However, the actual duration of credit-bearing courses may differ significantly from the Carnegie-unit standard.
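The Carnegie-unit arithmetic above can be sketched as a quick calculation (a minimal illustration only; the variable names are chosen for this example, not drawn from any official definition):

```python
# Carnegie unit: one hour of instruction a day,
# five days a week, for 24 weeks.
hours_per_day = 1
days_per_week = 5
weeks = 24

carnegie_unit_hours = hours_per_day * days_per_week * weeks
print(carnegie_unit_hours)  # → 120
```

As the article notes, actual course schedules often deviate from this standard, so the 120-hour figure is a nominal benchmark rather than a measured quantity.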
Most public high schools require students to accumulate credits to earn a diploma. While schools and districts determine credit requirements, states require schools to have minimum credit requirements in place. For example, a state might require students to earn a minimum of 18 credits to be eligible for a high school diploma, but a school may choose to increase credit requirements to 24 credits or higher. While credit requirements vary from state to state and school to school, they generally outline minimum requirements in the following subject areas: English language arts, mathematics, social studies, science, health, physical education, technology, and world languages. Schools also typically require students to earn a certain number of “elective” credits, and elective courses can span a wide variety of subject areas, including those listed above. For a related discussion, see core course of study.
In recent years, the traditional course credit has become the object of reform, particularly as an extension of proficiency-based learning or of efforts to change assessment strategies, grading practices, graduation requirements, and core courses of study in schools. Some states have sought to raise educational expectations, increase instructional time in certain subject areas, and improve student preparation by raising minimum credit requirements. For example, state regulations may require public high school students to complete four “years” of English and math—the equivalent of four credits in each subject—but only two or three years of science and social studies. As a way to promote stronger student preparation in science and social studies, states may decide to increase credit requirements. Other subject areas, such as technology, health, or world languages, have also seen increases in minimum credit requirements. Districts and schools may also elect to increase credit requirements independently, and some education organizations have recommended stronger credit requirements as a strategy for promoting higher academic achievement and more prepared graduates. In effect, increasing credit requirements in a given subject area extends the amount of time students will be taught, which raises the likelihood that they will be better educated in that subject area.
Critics of course credit may argue, however, that credit-based systems allow students to pass courses, earn credits, and get promoted from one grade level to the next even though they may not have acquired essential knowledge and skills, or they may not be adequately prepared for the next grade or for higher-level courses. The credit is often cited as one of the reasons why some students can earn a high school diploma, for example, yet still struggle with basic reading, writing, and math skills.
A term commonly associated with credit-related reforms is “seat time”—a reference to the 120-hour Carnegie unit upon which most course credits are based. The basic idea is that credits measure the amount of time students have been taught, not what they have actually learned or failed to learn. For example, one student may earn an A in a course, while another student earns a D, and yet both may earn credit for passing the course. Given that the two grades likely represent significantly different levels of learning acquisition, what does the credit actually represent? In addition, if the awarding of credit is not based on some form of consistently applied learning standards—expectations for what students should know and be able to do at a particular stage of their education—then it becomes difficult to determine what students have learned or failed to learn, further undermining the credit as a reliable measurement of learning acquisition and academic accomplishment.
Some educators and education reformers argue that strategies such as learning standards, proficiency-based learning, and demonstrations of learning, among others, provide more valid and reliable ways to determine what students have learned, whether they should be promoted to the next grade level, and whether they should receive a diploma.
Credits are a familiar concept, and their use is so widespread that people have become accustomed to them, which may contribute to debates about course-credit reforms: some may question why something so universally used needs to be changed. That said, credits are more likely to be the indirect object of debates about related issues, such as learning standards, grading practices, or proficiency-based learning.
Some advocates might argue, for example, that credits are a simple, widely used way for schools to ensure that students receive a certain amount of instructional time in important subject areas. They may also point out that minimum credit requirements imposed by states have been effective in raising educational expectations and improving student preparation in critical subject areas.
Critics of credit-based systems will likely echo the points made above, questioning whether credits should be used at all given that they are an imprecise way to measure learning acquisition and academic accomplishment. Credits, they may contend, provide a false sense of security: while having earned credit makes it appear that students are learning—i.e., they have passed courses—credits may in fact be misleading and misrepresentative, since students are often able to earn credit even though they have failed to learn what the course was intended to teach. To detractors, schools should instead be measuring what students have learned or not learned—using time-based requirements such as credits, rather than learning-acquisition requirements such as learning standards, will simply allow students to continue passing courses, moving on to the next grade level, and graduating even though they may lack important knowledge and skills.