The National Council on Teacher Quality, a Washington-based think tank, has issued a number of reports in recent years on teacher preparation around the country. Its flagship effort since 2013, the Teacher Prep Review, is an annual report released in June that rates programs on how well they prepare new teachers. To keep its name in front of the media between those major annual releases, the council has issued a series of studies on other aspects of teacher preparation. The latest one, "Easy A’s and What’s Behind Them," came out this week. As with the organization’s other studies, this one has fatal flaws that undermine most of its conclusions.
The study purports to rate whether teacher-preparation programs are lax in grading standards by examining the proportion of students in those programs achieving academic honors (e.g., high honors or cum laude status) as compared to all undergraduates in the university. To do this, the organization obtained the spring 2012 commencement booklets from approximately 500 colleges and universities and then counted everything up. My first reaction when I heard about the report was, "Wow, pity those poor interns who had to sit and manually do this" (or perhaps the work was outsourced to India, who knows?).
A few days before the report was issued, the council emailed to let me know that our college had "met the standard" for the report, but the message did not include a copy of the report or an explanation of its methodology. The email did not even state whether meeting the standard meant that the proportion of honor students in teacher preparation was above, below, or at the same level as that of the university as a whole.
Upon receiving the report this week, I could see that to meet the standard, a teacher-preparation program’s proportion of graduates earning honors had to be within 10 percentage points of the proportion for the university as a whole. In other words, if 30 percent of students across the university received academic honors, the proportion for the teacher-preparation program had to fall between 20 percent and 40 percent to meet the council’s standard.
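As I read it, the council’s decision rule reduces to a simple band check. Here is a minimal sketch of that rule — my own reconstruction for illustration; the function and variable names are mine, not the council’s:

```python
def meets_standard(program_honors_pct: float, university_honors_pct: float,
                   band: float = 10.0) -> bool:
    """Return True if the program's honors rate is within `band`
    percentage points of the university-wide honors rate."""
    return abs(program_honors_pct - university_honors_pct) <= band

# The example from the text: a 30 percent university-wide honors rate
# allows any program rate between 20 and 40 percent.
print(meets_standard(35.0, 30.0))  # True  (within the 10-point band)
print(meets_standard(45.0, 30.0))  # False (15 points above the rate)
```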
At first glance, this may seem like a reasonable way to measure whether teacher-preparation programs are too lax in their grading standards. But the methodology is fraught with problems:
• The report states: "Our evaluation of institutions on this standard measures the rigor of their preparation as indicated by the grade point average (GPA) differential between graduating teacher candidates and all other graduating students." In fact, the report does no such thing, because the council did not have access to students’ GPAs. All it could determine was whether a student earned honors or not. For example, here at Michigan State, the GPA cutoff for earning honors is 3.69, and approximately 20 percent of students reach that level. But in examining student names in the commencement booklet, the council has no way of knowing whether someone who did not achieve honors had a GPA of 3.68 or 2.68, so it is impossible for it to calculate a "GPA differential" between any two groups of students.
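The information loss here is easy to demonstrate. In the hypothetical sketch below (all GPA figures are invented for the example), two groups post identical honors rates under a 3.69 cutoff while their average GPAs differ by more than three-quarters of a point:

```python
# Two hypothetical groups of five graduates each. Both show the same
# honors rate under a 3.69 cutoff, yet very different mean GPAs --
# information a commencement booklet cannot recover.
HONORS_CUTOFF = 3.69

group_a = [3.70, 3.68, 3.65, 3.60, 3.55]  # strong students, one with honors
group_b = [3.70, 2.90, 2.75, 2.60, 2.40]  # weaker students, one with honors

for name, gpas in [("A", group_a), ("B", group_b)]:
    honors_rate = sum(g >= HONORS_CUTOFF for g in gpas) / len(gpas)
    mean_gpa = sum(gpas) / len(gpas)
    print(f"Group {name}: honors rate {honors_rate:.0%}, mean GPA {mean_gpa:.2f}")

# Both groups show a 20% honors rate, so the "GPA differential"
# between them (3.64 vs. 2.87) is invisible in the honors counts.
```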
• At Michigan State, all secondary-education students must major in the discipline they wish to teach, and thus they are not listed as College of Education students in our commencement booklet; instead, they are listed with their home college. And in those other colleges’ listings, there is no way to distinguish teacher candidates from other students in the same major. Thus, outside of graduates in elementary and special education, it is impossible to know from the commencement booklet who all the teacher candidates are.
The council fudges this problem by creating an analysis category it calls "less precise data" and applying different standards to institutions (including Michigan State) that fall into it. To compare the institutions for which it had "precise data" with those for which it had "less precise data," it took a subsample of 29 institutions with precise data and graded them again using the less-precise-data standard. While it found a high correlation between the scores under the two methodologies, the grade still differed for six of the 29 institutions, roughly one in five. Those six institutions could thus have been mislabeled (as having lax academic standards or not) depending on which methodology was used.
• There is an underlying assumption in the council’s methodology that if a teacher-preparation program has too many students graduating with honors, it must be due to grade inflation, rather than the possibility that the students are simply high achieving. Some may snicker at this possibility, but there is nothing in the report that disproves this alternative explanation (or even casts serious doubt on it).
In fact, buried in a footnote in the report is this statement: "For example, while lax grading standards could be part of the cause for the differential in honors, evidence does not suggest that this problem is worse in teacher preparation than elsewhere in an institution." This is a stunning admission that the central thesis of the report—"we find that in a majority of institutions (58 percent), grading standards for teacher candidates are much lower than for students in other majors on the same campus," as stated in the report’s executive summary—is not supported by any evidence.
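Indeed, a toy simulation shows how a large honors differential can arise from student selection alone, with the same grading standard applied everywhere. Every parameter below is invented purely for illustration:

```python
import random

random.seed(0)
HONORS_CUTOFF = 3.69

def simulate_gpas(n, mean, sd=0.35):
    """Draw n GPAs from a normal distribution, clipped to [0, 4]."""
    return [min(4.0, max(0.0, random.gauss(mean, sd))) for _ in range(n)]

# Identical grading everywhere; the only difference is incoming ability,
# modeled here as a higher mean GPA for the education cohort.
university = simulate_gpas(10_000, mean=3.2)
education = simulate_gpas(1_000, mean=3.5)

for name, gpas in [("University", university), ("Education", education)]:
    rate = sum(g >= HONORS_CUTOFF for g in gpas) / len(gpas)
    print(f"{name}: {rate:.1%} with honors")

# The education cohort clears the honors cutoff far more often even
# though the grading standard (the distribution's shape and the
# cutoff) is exactly the same for both groups.
```

Under these made-up parameters, the education cohort’s honors rate exceeds the university’s by well over the council’s 10-point band, with no grade inflation anywhere in the model.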
• The council’s methodology also does not take into account that many education majors, even those in elementary education who are not also majoring in a discipline, take the majority of their courses outside of the teacher-preparation program. In Michigan, in order to be eligible for state certification as an elementary teacher, a student must earn a large number of credit hours in disciplinary courses.
At Michigan State, elementary-education candidates choose a focus area in language arts, social studies, integrated science, or mathematics. A student choosing the integrated-science option, for example, must complete 55 to 58 credit hours in courses from the departments of biology, chemistry, earth science, statistics, and mathematics. This includes required courses such as organismal and population biology, general and inorganic chemistry, and college algebra and trigonometry.
Few would consider those "gut" courses in which a student earns an easy A. And remember, these are elementary-education majors. Since such disciplinary courses make up the majority of these students’ credit hours, the notion that lax grading in education courses leads to too many students with honors is, statistically, highly implausible.
While all of us in the teacher-preparation industry strive for high standards and accept being held accountable for meeting them, we believe that measuring us against those standards must be done in a methodologically sound fashion. The report by the National Council on Teacher Quality fails to do this and thus should be discounted as a measure of just how many "easy A’s" are earned by education students across the country.
Donald E. Heller is dean of the College of Education and a professor in the department of educational administration at Michigan State University.
1 comment:
Though we were supposed to be divesting ourselves of grades and moving toward standards-based evaluation and assessment.
I get tired of this inflated-grade argument. Maybe the way we instruct and evaluate has changed and made formal grading obsolete. Just because I got an "A" in middle school back in the 1970s because I could crank out 50 correct multiplication problems in 3 minutes, or accurately memorize and regurgitate all 50 state capitals on a written test, doesn't necessarily mean that I learned anything of greater value.