## Fewer Students Learning Arithmetic and Algebra

by Jerome Dancis

This summer, I obtained the college remediation data for my state of Maryland. Well, just through 2014, the latest year available, so this is BCC data, i.e., from before the Common Core became the basis of the state tests in Maryland.

Does anyone know of similar data for other states?


Analysis based on data from the Maryland Higher Education Commission's (MHEC) Student Outcome and Achievement Report (SOAR).

The data for my state of Maryland (MD) is below. (This data may be typical of many of the 45 states that adopted the NCTM Standards.)

Decline in Percent of Freshmen Entering Colleges in Maryland, Who Knew Arithmetic and Real High School Algebra I:

| Group | 1998 | 2005 | 2006 | 2014 |
|---|---|---|---|---|
| Whites | 67% | 60% | 58% | 64% |
| African-Americans | 44% | 33% | 36% | 37% |
| Hispanics | 56% | 42% | 43% | 44% |

See my [Univ. of Maryland] Faculty Voice article; scroll down to the bottom of Page 1.

Caveat. This data describes only those graduates of Maryland high schools in 1998, 2005, 2006, and 2014 who entered a college in Maryland the same year.

Related Data. From 1998 to 2005, the number of white graduates increased by 11% (from 14,473 to 16,127), but the number who knew arithmetic and high school Algebra I decreased (from 9,703 to 9,619), as determined by college placement tests.
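The arithmetic behind these figures can be checked directly. A quick sketch (using only the counts quoted above, rounded to whole percents) confirms that the raw counts reproduce the table's 67% and 60% readiness figures and the stated 11% enrollment increase:

```python
# Check the counts quoted above against the table's percentages
# (whole-percent rounding) and the stated 11% enrollment increase.
ready_1998, total_1998 = 9703, 14473
ready_2005, total_2005 = 9619, 16127

pct_1998 = round(100 * ready_1998 / total_1998)  # 67 -> matches the table
pct_2005 = round(100 * ready_2005 / total_2005)  # 60 -> matches the table
growth = round(100 * (total_2005 - total_1998) / total_1998)  # 11% increase

print(pct_1998, pct_2005, growth)  # 67 60 11
```

So even as enrollment grew by roughly 11%, the absolute number of math-ready white freshmen fell slightly, which is why the readiness percentage dropped seven points.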

Similarly, from 1998 to 2005, the number of African-American graduates who were minimally ready for college math went down, in spite of increased college enrollments (up 21% for females and 31% for males).

One likely cause of the downturn: high school Algebra I used to be the algebra course colleges expected. Under the specter of the MD School Assessments (MSAs) and High School Assessments (HSAs), school administrators bent instructional programs out of shape in order to teach to the state tests. The MSAs in math and the MD Voluntary Math Curriculum marginalized arithmetic, leaving too many students without sufficient time to learn it. Arithmetic lessons were largely arithmetic with calculator, and the MD HSA on algebra was algebra with graphing calculator. The MD HSA on algebra avoided the arithmetic and arithmetic-based algebra students would need in college, such as knowing that 3x + 2x = 5x and that 9 × 8 = 72. I nicknamed it the MD HSA on "Pretend Algebra."

Fewer Students Learning Arithmetic and Algebra was originally published on Nonpartisan Education Blog


## Cognitive Science and the Common Core

New in the Nonpartisan Education Review:

Cognitive Science and the Common Core Mathematics Standards

by Eric A. Nelson

Abstract

Between 1995 and 2010, most U.S. states adopted K–12 math standards which discouraged memorization of math facts and procedures.  Since 2010, most states have revised standards to align with the K–12 Common Core Mathematics Standards (CCMS).  The CCMS do not ask students to memorize facts and procedures for some key topics and delay work with memorized fundamentals in others.

Recent research in cognitive science has found that the brain has only minimal ability to reason with knowledge that has not previously been well-memorized.  This science predicts that students taught under math standards that discouraged initial memorization for math topics will have significant difficulty solving numeric problems in mathematics, science, and engineering.  As one test of this prediction, in a recent OECD assessment of numeracy skills among 22 developed-world nations, U.S. 16–24 year olds ranked dead last.  Discussion will include steps that can be taken to align K–12 state standards with practices supported by cognitive research.

Cognitive Science and the Common Core was originally published on Nonpartisan Education Blog


## Significance of PISA math results

A new round of two international comparisons of student mathematics performance came out recently, and there was a lot of interest because the reports were almost simultaneous: TIMSS[1] in late November 2016 and PISA[2] just a week later. They are often reported as 2015 rather than 2016 because the data collection for each took place in late 2015, which would seem to make the comparison even more apt. In fact, no comparison is appropriate; they are completely different instruments, and of the two, TIMSS is the one that should be of more concern to educators. Perhaps surprisingly, and with great room for improvement, US performance is not as dire as the PISA results would imply. By contrast, Finland continues to demonstrate that its internationally recognized record of PISA-proven success in mathematics education, with its widely applauded, student-friendly approach, is completely misleading.

In spite of the popular press and mathematics education folklore, Finland's performance has been known to be overrated since PISA first came out, as documented in an open letter[3] written by the president of the Finnish Mathematical Society and cosigned by many mathematicians and experts in other math-based disciplines:

“The PISA survey tells only a partial truth of Finnish children’s mathematical skills” “in fact the mathematical knowledge of new students has declined dramatically”

This letter links to a description[4] of the most fundamental problem that directly involves elementary mathematics education:

“Severe shortcomings in Finnish mathematics skills” “If one does not know how to handle fractions, one is not able to know algebra”

The previous TIMSS had Finland's 4th-grade performance a bit above that of the US but well behind by 8th grade. In the new report, Finland has slipped below the US at 4th grade and did not even submit itself to be assessed at 8th grade, much less at the Advanced level. Similar remarks apply to another country often recognized for its student-friendly mathematics education, the Netherlands, home of PISA at the Freudenthal Institute. This decline is reflected in the TIMSS summary of student performance,[1] with the comparative grade-level rankings given as Exhibits 1.1 and 1.2 and the Advanced[5] rankings as Exhibit M1.1.

By contrast, when PISA[2] came out a week later, the rankings read:

Netherlands 11
Finland 13
United States 41

Note: These rankings include China* (just below Japan), represented by 3 provinces rather than the whole country; if it is omitted, subtract 1.

Why the difference? The problem is that PISA was never a test of "school mathematics" but of all 15-year-old students' "mathematics literacy,"[6] not even mathematics at the algebra level needed for non-remedial admission to college, much less the TIMSS Advanced level, interpreted in the US as AP or IB Calculus:

“PISA is the U.S. source for internationally comparative information on the mathematical and scientific literacy of students in the upper grades at an age that, for most countries, is near the end of compulsory schooling. The objective of PISA is to measure the “yield” of education systems, or what skills and competencies students have acquired and can apply in these subjects to real-world contexts by age 15. The literacy concept emphasizes the mastery of processes, understanding of concepts, and application of knowledge and functioning in various situations within domains. By focusing on literacy, PISA draws not only from school curricula but also from learning that may occur outside of school.”

Historically relevant is the fact that the conception of PISA at the Freudenthal Institute in the Netherlands included heavy guidance from Thomas Romberg of the University of Wisconsin's WCER, the original creator of the middle school math ed curriculum MiC, Mathematics in Context. Its underlying philosophy is exactly that of PISA: the study of mathematics through everyday applications that do not require the development of the more sophisticated mathematics that opens the doors for deeper study in mathematics, i.e., all mildly sophisticated math-based career opportunities, the so-called STEM careers. In point of fact, the arithmetic of the PISA applications is calculator-friendly, so even elementary arithmetic through ordinary fractions, so necessary for eventual algebra, need not be developed to score well.

[1] http://timss2015.org/timss-2015/mathematics/student-achievement/
[2] http://nces.ed.gov/pubs2017/2017048.pdf (Table 3, page 23)
[3] http://matematiikkalehtisolmu.fi/2005/erik/PisaEng.html
[4] http://matematiikkalehtisolmu.fi/2005/erik/KivTarEng.html
[5] http://timss2015.org/advanced/ [Distribution of Advanced Mathematics Achievement]
[6] https://nces.ed.gov/timss/pdf/naep_timss_pisa_comp.pdf

Wayne Bishop, PhD
Professor of Mathematics, Emeritus
California State University, LA

Significance of PISA math results was originally published on Nonpartisan Education Blog


## Fordham Institute’s pretend research

The Thomas B. Fordham Institute has released a report, Evaluating the Content and Quality of Next Generation Assessments,[i] ostensibly an evaluative comparison of four testing programs, the Common Core-derived SBAC and PARCC, ACT’s Aspire, and the Commonwealth of Massachusetts’ MCAS.[ii] Of course, anyone familiar with Fordham’s past work knew beforehand which tests would win.

This latest Fordham Institute Common Core apologia is not so much research as a caricature of it.

1. Instead of referencing a wide range of relevant research, Fordham references only friends from inside their echo chamber and others paid by the Common Core’s wealthy benefactors. But, they imply that they have covered a relevant and adequately wide range of sources.
2. Instead of evaluating tests according to the industry standard Standards for Educational and Psychological Testing, or any of dozens of other freely-available and well-vetted test evaluation standards, guidelines, or protocols used around the world by testing experts, they employ “a brand new methodology” specifically developed for Common Core, for the owners of the Common Core, and paid for by Common Core’s funders.
3. Instead of suggesting as fact only that which has been rigorously evaluated and accepted as fact by skeptics, the authors continue the practice of Common Core salespeople of attributing benefits to their tests for which no evidence exists.
4. Instead of addressing any of the many sincere, profound critiques of their work, as confident and responsible researchers would do, the Fordham authors tell their critics to go away—“If you don’t care for the standards…you should probably ignore this study” (p. 4).
5. Instead of writing in neutral language as real researchers do, the authors adopt the practice of coloring their language as so many Common Core salespeople do, attaching nice-sounding adjectives and adverbs to what serves their interest, and bad-sounding words to what does not.

1.  Common Core’s primary private financier, the Bill & Melinda Gates Foundation, pays the Fordham Institute handsomely to promote the Core and its associated testing programs.[iii] A cursory search through the Gates Foundation web site reveals \$3,562,116 granted to Fordham since 2009 expressly for Common Core promotion or “general operating support.”[iv] Gates awarded an additional \$653,534 between 2006 and 2009 for forming advocacy networks, which have since been used to push Common Core. All of the remaining Gates-to-Fordham grants listed supported work promoting charter schools in Ohio (\$2,596,812), reputedly the nation’s worst.[v]

The other research entities involved in the latest Fordham study either directly or indirectly derive sustenance at the Gates Foundation dinner table:

• the Human Resources Research Organization (HumRRO),[vi]
• the Council of Chief State School Officers (CCSSO), co-holder of the Common Core copyright and author of the test evaluation “Criteria.”[vii]
• the Stanford Center for Opportunity Policy in Education (SCOPE), headed by Linda Darling-Hammond, the chief organizer of one of the federally-subsidized Common Core-aligned testing programs, the Smarter-Balanced Assessment Consortium (SBAC),[viii] and
• Student Achievement Partners, the organization that claims to have inspired the Common Core standards[ix]

The Common Core's grandees have always hired only their own well-subsidized grantees to evaluate their products. The Buros Center for Testing at the University of Nebraska has conducted test reviews for decades, publishing many of them in its annual Mental Measurements Yearbook for the entire world to see and critique. Indeed, Buros exists to conduct test reviews, and it retains hundreds of the world's brightest and most independent psychometricians on its reviewer roster. Why did Common Core's funders not hire genuine professionals from Buros to evaluate PARCC and SBAC? The non-psychometricians at the Fordham Institute would seem a vastly inferior substitute... that is, had the purpose genuinely been an objective evaluation.

2.  A second reason Fordham’s intentions are suspect rests with their choice of evaluation criteria. The “bible” of North American testing experts is the Standards for Educational and Psychological Testing, jointly produced by the American Psychological Association, National Council on Measurement in Education, and the American Educational Research Association. Fordham did not use it.[x]

Had Fordham compared the tests using the Standards for Educational and Psychological Testing (or any of a number of other widely respected test evaluation standards, guidelines, or protocols[xi]), SBAC and PARCC would have flunked. They have yet to accumulate some of the most basic empirical evidence of reliability, validity, or fairness, and past experience with similar types of assessments suggests they will fail on all three counts.[xii]

Instead, Fordham chose to reference an alternate set of evaluation criteria concocted by the organization that co-owns the Common Core standards and co-sponsored their development (Council of Chief State School Officers, or CCSSO), drawing on the work of Linda Darling-Hammond’s SCOPE, the Center for Research on Educational Standards and Student Testing (CRESST), and a handful of others.[xiii],[xiv] Thus, Fordham compares SBAC and PARCC to other tests according to specifications that were designed for SBAC and PARCC.[xv]

The authors write, "The quality and credibility of an evaluation of this type rests largely on the expertise and judgment of the individuals serving on the review panels" (p. 12). A scan of the names of everyone in decision-making roles, however, reveals that Fordham relied on those they have hired before and whose decisions they could safely predict. Regardless, given the evaluation criteria employed, the outcome was foreordained no matter whom they hired to review, not unlike a rigged election in a dictatorship where voters' choices are restricted to already-chosen candidates.

Still, PARCC and SBAC might have flunked even if Fordham had compared tests using all 24+ of CCSSO’s “Criteria.” But Fordham chose to compare on only 14 of the criteria.[xvi] And those just happened to be criteria mostly favoring PARCC and SBAC.

Without exception, the Fordham study avoided all the evaluation criteria in the categories:

• "Meet overall assessment goals and ensure technical quality",
• "Yield valuable reports on student progress and performance",
• "Adhere to best practices in test administration", and
• "State specific criteria"[xvii]

What types of test characteristics can be found in these neglected categories? Test security, providing timely data to inform instruction, validity, reliability, score comparability across years, transparency of test design, requiring involvement of each state’s K-12 educators and institutions of higher education, and more. Other characteristics often claimed for PARCC and SBAC, without evidence, cannot even be found in the CCSSO criteria (e.g., internationally benchmarked, backward mapping from higher education standards, fairness).

The report does not evaluate the “quality” of tests, as its title suggests; at best it is an alignment study. And, naturally, one would expect the Common Core consortium tests to be more aligned to the Common Core than other tests. The only evaluative criteria used from the CCSSO’s Criteria are in the two categories “Align to Standards—English Language Arts” and “Align to Standards—Mathematics” and, even then, only for grades 5 and 8.

Nonetheless, the authors claim, “The methodology used in this study is highly comprehensive” (p. 74).

The authors of the Pioneer Institute’s report How PARCC’s false rigor stunts the academic growth of all students,[xviii] recommended strongly against the official adoption of PARCC after an analysis of its test items in reading and writing. They also did not recommend continuing with the current MCAS, which is also based on Common Core’s mediocre standards, chiefly because the quality of the grade 10 MCAS tests in math and ELA has deteriorated in the past seven or so years for reasons that are not yet clear. Rather, they recommend that Massachusetts return to its effective pre-Common Core standards and tests and assign the development and monitoring of the state’s mandated tests to a more responsible agency.

Perhaps the primary conceit of Common Core proponents is that the familiar multiple-choice/short answer/essay standardized tests ignore some, and arguably the better, parts of learning (the deeper, higher, more rigorous, whatever)[xix]. Ironically, it is they—opponents of traditional testing content and formats—who propose that standardized tests measure everything. By contrast, most traditional standardized test advocates do not suggest that standardized tests can or should measure any and all aspects of learning.

Consider this standard from the Linda Darling-Hammond, et al. source document for the CCSSO criteria:

”Research: Conduct sustained research projects to answer a question (including a self-generated question) or solve a problem, narrow or broaden the inquiry when appropriate, and demonstrate understanding of the subject under investigation. Gather relevant information from multiple authoritative print and digital sources, use advanced searches effectively, and assess the strengths and limitations of each source in terms of the specific task, purpose, and audience.”[xx]

Who would oppose this as a learning objective? But, does it make sense as a standardized test component? How does one objectively and fairly measure “sustained research” in the one- or two-minute span of a standardized test question? In PARCC tests, this is simulated by offering students snippets of documentary source material and grading them as having analyzed the problem well if they cite two of those already-made-available sources.

But, that is not how research works. It is hardly the type of deliberation that comes to most people’s mind when they think about “sustained research”. Advocates for traditional standardized testing would argue that standardized tests should be used for what standardized tests do well; “sustained research” should be measured more authentically.

The authors of the aforementioned Pioneer Institute report recommend, as their 7th policy recommendation for Massachusetts:

“Establish a junior/senior-year interdisciplinary research paper requirement as part of the state’s graduation requirements—to be assessed at the local level following state guidelines—to prepare all students for authentic college writing.”[xxi]

PARCC, SBAC, and the Fordham Institute propose that they can validly, reliably, and fairly measure the outcome of what is normally a weeks- or months-long project in a minute or two. It is attempting to measure that which cannot be well measured on standardized tests that makes PARCC and SBAC tests “deeper” than others. In practice, the alleged deeper parts are the most convoluted and superficial.

Appendix A of the source document for the CCSSO criteria provides three international examples of “high-quality assessments” in Singapore, Australia, and England.[xxiii] None are standardized test components. Rather, all are projects developed over extended periods of time—weeks or months—as part of regular course requirements.

Common Core proponents scoured the globe to locate “international benchmark” examples of the type of convoluted (i.e., “higher”, “deeper”) test questions included in PARCC and SBAC tests. They found none.

3.  The authors continue the Common Core sales tendency of attributing benefits to their tests for which no evidence exists. For example, the Fordham report claims that SBAC and PARCC will:

“make traditional ‘test prep’ ineffective” (p. 8)

“allow students of all abilities, including both at-risk and high-achieving youngsters, to demonstrate what they know and can do” (p. 8)

produce “test scores that more accurately predict students’ readiness for entry-level coursework or training” (p. 11)

“reliably measure the essential skills and knowledge needed … to achieve college and career readiness by the end of high school” (p. 11)

“…accurately measure student progress toward college and career readiness; and provide valid data to inform teaching and learning.” (p. 3)

eliminate the problem of “students … forced to waste time and money on remedial coursework.” (p. 73)

help “educators [who] need and deserve good tests that honor their hard work and give useful feedback, which enables them to improve their craft and boost their students’ success.” (p. 73)

The Fordham Institute has not a shred of evidence to support any of these grandiose claims. They have more in common with carnival fortune-telling than with empirical research. Granted, most of the statements refer to future outcomes, which cannot be known with certainty. But that just affirms how irresponsible it is to make such claims absent any evidence.

Furthermore, in most cases, past experience would suggest just the opposite of what Fordham asserts. Test prep is more, not less, likely to be effective with SBAC and PARCC tests because the test item formats are complex (or, convoluted), introducing more “construct irrelevant variance”—that is, students will get lower scores for not managing to figure out formats or computer operations issues, even if they know the subject matter of the test. Disadvantaged and at-risk students tend to be the most disadvantaged by complex formatting and new technology.

As for Common Core, SBAC, and PARCC eliminating the “problem of” college remedial courses, such will be done by simply cancelling remedial courses, whether or not they might be needed, and lowering college entry-course standards to the level of current remedial courses.

4.  When not dismissing or denigrating SBAC and PARCC critiques, the Fordham report evades them, even suggesting that critics should not read it: “If you don’t care for the standards…you should probably ignore this study” (p. 4).

Yet, cynically, in the very first paragraph the authors invoke the name of Sandy Stotsky, one of their most prominent adversaries, and a scholar of curriculum and instruction so widely respected she could easily have gotten wealthy had she chosen to succumb to the financial temptation of the Common Core’s profligacy as so many others have. Stotsky authored the Fordham Institute’s “very first study” in 1997, apparently. Presumably, the authors of this report drop her name to suggest that they are broad-minded. (It might also suggest that they are now willing to publish anything for a price.)

Tellingly, one will find Stotsky’s name nowhere after the first paragraph. None of her (or anyone else’s) many devastating critiques of the Common Core tests is either mentioned or referenced. Genuine research does not hide or dismiss its critiques; it addresses them.

Ironically, the authors write, “A discussion of [test] qualities, and the types of trade-offs involved in obtaining them, are precisely the kinds of conversations that merit honest debate.” Indeed.

5.  Instead of writing in neutral language as real researchers do, the authors adopt the habit of coloring their language as Common Core salespeople do. They attach nice-sounding adjectives and adverbs to what they like, and bad-sounding words to what they don’t.

For PARCC and SBAC one reads:

“strong content, quality, and rigor”

“stronger tests, which encourage better, broader, richer instruction”

“tests that focus on the essential skills and give clear signals”

“major improvements over the previous generation of state tests”

“complex skills they are assessing.”

“high-quality assessment”

“high-quality assessments”

“high-quality tests”

“high-quality test items”

“high quality and provide meaningful information”

“carefully-crafted tests”

“these tests are tougher”

“more rigorous tests that challenge students more than they have been challenged in the past”

For other tests one reads:

“low-quality assessments poorly aligned with the standards”

“will undermine the content messages of the standards”

“a best-in-class state assessment, the 2014 MCAS, does not measure many of the important competencies that are part of today’s college and career readiness standards”

“have generally focused on low-level skills”

“have given students and parents false signals about the readiness of their children for postsecondary education and the workforce”

Appraising its own work, Fordham writes:

“groundbreaking evaluation”

“meticulously assembled panels”

“highly qualified yet impartial reviewers”

Considering those who have adopted SBAC or PARCC, Fordham writes:

“thankfully, states have taken courageous steps”

“states’ adoption of college and career readiness standards has been a bold step in the right direction.”

“adopting and sticking with high-quality assessments requires courage.”

A few other points bear mentioning. The Fordham Institute was granted access to operational SBAC and PARCC test items. Over the course of a few months in 2015, the Pioneer Institute, a strong critic of Common Core, PARCC, and SBAC, appealed for similar access to PARCC items. The convoluted run-around responses from PARCC officials excelled at bureaucratic stonewalling. Despite numerous requests, Pioneer never received access.

The Fordham report claims that PARCC and SBAC are governed by “member states”, whereas ACT Aspire is owned by a private organization. Actually, the Common Core Standards are owned by two private, unelected organizations, the Council of Chief State School Officers and the National Governors’ Association, and only each state’s chief school officer sits on PARCC and SBAC panels. Individual states actually have far more say-so if they adopt ACT Aspire (or their own test) than if they adopt PARCC or SBAC. A state adopts ACT Aspire under the terms of a negotiated, time-limited contract. By contrast, a state or, rather, its chief state school officer, has but one vote among many around the tables at PARCC and SBAC. With ACT Aspire, a state controls the terms of the relationship. With SBAC and PARCC, it does not.[xxiv]

Just so you know, on page 71, Fordham recommends that states eliminate any tests that are not aligned to the Common Core Standards, in the interest of efficiency, supposedly.

In closing, it is only fair to mention the good news in the Fordham report. It promises on page 8, “We at Fordham don’t plan to stay in the test-evaluation business”.

[i] Nancy Doorey & Morgan Polikoff. (2016, February). Evaluating the content and quality of next generation assessments. With a Foreword by Amber M. Northern & Michael J. Petrilli. Washington, DC: Thomas B. Fordham Institute. http://edexcellence.net/publications/evaluating-the-content-and-quality-of-next-generation-assessments

[ii] PARCC is the Partnership for Assessment of Readiness for College and Careers; SBAC is the Smarter-Balanced Assessment Consortium; MCAS is the Massachusetts Comprehensive Assessment System; ACT Aspire is not an acronym (though, originally ACT stood for American College Test).

[iii] The reason for inventing a Fordham Institute when a Fordham Foundation already existed may have had something to do with taxes, but it also allows Chester Finn, Jr. and Michael Petrilli to each pay themselves two six figure salaries instead of just one.

[vi] HumRRO has produced many favorable reports for Common Core-related entities, including alignment studies in Kentucky, New York State, California, and Connecticut.

[vii] CCSSO has received 23 grants from the Bill & Melinda Gates Foundation from “2009 and earlier” to 2016 collectively exceeding \$100 million. http://www.gatesfoundation.org/How-We-Work/Quick-Links/Grants-Database#q/k=CCSSO

[ix] Student Achievement Partners has received four grants from the Bill & Melinda Gates Foundation from 2012 to 2015 exceeding \$13 million. http://www.gatesfoundation.org/How-We-Work/Quick-Links/Grants-Database#q/k=%22Student%20Achievement%20Partners%22

[x] The authors write that the standards they use are “based on” the real Standards. But, that is like saying that Cheez Whiz is based on cheese. Some real cheese might be mixed in there, but it’s not the product’s most distinguishing ingredient.

[xi] (e.g., the International Test Commission’s (ITC) Guidelines for Test Use; the ITC Guidelines on Quality Control in Scoring, Test Analysis, and Reporting of Test Scores; the ITC Guidelines on the Security of Tests, Examinations, and Other Assessments; the ITC’s International Guidelines on Computer-Based and Internet-Delivered Testing; the European Federation of Psychologists’ Association (EFPA) Test Review Model; the Standards of the Joint Committee on Testing Practices)

[xii] Despite all the adjectives and adverbs implying newness to PARCC and SBAC as “Next Generation Assessment”, it has all been tried before and failed miserably. Indeed, many of the same persons involved in past fiascos are pushing the current one. The allegedly “higher-order”, more “authentic”, performance-based tests administered in Maryland (MSPAP), California (CLAS), and Kentucky (KIRIS) in the 1990s failed because of unreliable scores; volatile test score trends; secrecy of items and forms; an absence of individual scores in some cases; individuals being judged on group work in some cases; large expenditures of time; inconsistent (and some improper) test preparation procedures from school to school; inconsistent grading on open-ended response test items; long delays between administration and release of scores; little feedback for students; and no substantial evidence after several years that education had improved. As one should expect, instruction had changed as test proponents desired, but without empirical gains or perceived improvement in student achievement. Parents, politicians, and measurement professionals alike overwhelmingly rejected these dysfunctional tests.

See, for example:

For California: Michael W. Kirst & Christopher Mazzeo. (1997, December). The rise, fall, and rise of state assessment in California: 1993–96. Phi Delta Kappan, 78(4); Committee on Education and the Workforce, U.S. House of Representatives, One Hundred Fifth Congress, Second Session. (1998, January 21). National testing: Hearing, Granada Hills, CA. Serial No. 105-74; Representative Steven Baldwin. (1997, October). Comparing assessments and tests. Education Reporter, 141. See also David Klein. (2003). "A brief history of American K-12 mathematics education in the 20th century." In James M. Royer (Ed.), Mathematical Cognition (pp. 175–226). Charlotte, NC: Information Age Publishing.

For Kentucky: ACT. (1993). "A study of core course-taking patterns: ACT-tested graduates of 1991-1993 and an investigation of the relationship between Kentucky's performance-based assessment results and ACT-tested Kentucky graduates of 1992." Iowa City, IA: Author; Richard Innes. (2003). Education research from a parent's point of view. Louisville, KY: Author. http://www.eddatafrominnes.com/index.html ; KERA Update. (1999, January). Misinformed, misled, flawed: The legacy of KIRIS, Kentucky's first experiment.

For Maryland: P. H. Hamp & C. B. Summers. (2002, Fall). "Education." In P. H. Hamp & C. B. Summers (Eds.), A guide to the issues 2002–2003. Maryland Public Policy Institute, Rockville, MD. http://www.mdpolicy.org/docLib/20051030Education.pdf ; Montgomery County Public Schools. (2002, Feb. 11). "Joint teachers/principals letter questions MSPAP." Public announcement, Rockville, MD. http://www.montgomeryschoolsmd.org/press/index.aspx?pagetype=showrelease&id=644 ; HumRRO. (1998). Linking teacher practice with statewide assessment of education. Alexandria, VA: Author. http://www.humrro.org/corpsite/page/linking-teacher-practice-statewide-assessment-education

[xiv] A rationale is offered for why they had to develop a brand new set of test evaluation criteria (p. 13). Fordham claims that new criteria were needed, which weighted some criteria more than others. But, weights could easily be applied to any criteria, including the tried-and-true, preexisting ones.

[xv] For an extended critique of the CCSSO Criteria employed in the Fordham report, see “Appendix A. Critique of Criteria for Evaluating Common Core-Aligned Assessments” in Mark McQuillan, Richard P. Phelps, & Sandra Stotsky. (2015, October). How PARCC’s false rigor stunts the academic growth of all students. Boston: Pioneer Institute, pp. 62-68. http://pioneerinstitute.org/news/testing-the-tests-why-mcas-is-better-than-parcc/

[xvi] Doorey & Polikoff, p. 14.

[xvii] MCAS bests PARCC and SBAC according to several criteria specific to the Commonwealth, such as the requirements under the current Massachusetts Education Reform Act (MERA) as a grade 10 high school exit exam, that tests students in several subject fields (and not just ELA and math), and provides specific and timely instructional feedback.

[xviii] McQuillan, M., Phelps, R.P., & Stotsky, S. (2015, October). How PARCC’s false rigor stunts the academic growth of all students. Boston: Pioneer Institute. http://pioneerinstitute.org/news/testing-the-tests-why-mcas-is-better-than-parcc/

[xix] It is perhaps the most enlightening paradox that, among Common Core proponents’ profuse expulsion of superlative adjectives and adverbs advertising their “innovative”, “next generation” research results, the words “deeper” and “higher” mean the same thing.

[xx] The document asserts, “The Common Core State Standards identify a number of areas of knowledge and skills that are clearly so critical for college and career readiness that they should be targeted for inclusion in new assessment systems.” Linda Darling-Hammond, Joan Herman, James Pellegrino, Jamal Abedi, J. Lawrence Aber, Eva Baker, Randy Bennett, Edmund Gordon, Edward Haertel, Kenji Hakuta, Andrew Ho, Robert Lee Linn, P. David Pearson, James Popham, Lauren Resnick, Alan H. Schoenfeld, Richard Shavelson, Lorrie A. Shepard, Lee Shulman, and Claude M. Steele. (2013). Criteria for high-quality assessment. Stanford, CA: Stanford Center for Opportunity Policy in Education; Center for Research on Student Standards and Testing, University of California at Los Angeles; and Learning Sciences Research Institute, University of Illinois at Chicago, p. 7. https://edpolicy.stanford.edu/publications/pubs/847

[xxi] McQuillan, Phelps, & Stotsky, p. 46.

[xxiii] Linda Darling-Hammond, et al., pp. 16-18. https://edpolicy.stanford.edu/publications/pubs/847

[xxiv] For an in-depth discussion of these governance issues, see Peter Wood’s excellent Introduction to Drilling Through the Core, http://www.amazon.com/gp/product/0985208694

Fordham Institute’s pretend research was originally published on Nonpartisan Education Blog


## Fordham report predictable, conflicted

On November 17, the Massachusetts Board of Elementary and Secondary Education (BESE) will decide the fate of the Massachusetts Comprehensive Assessment System (MCAS) and the Partnership for Assessment of Readiness for College and Careers (PARCC) in the Bay State. MCAS is homegrown; PARCC is not. Barring unexpected compromises or subterfuges, only one program will survive.

Over the past year, PARCC promoters have released a stream of reports comparing the two testing programs. The latest arrives from the Thomas B. Fordham Institute in the form of a partial “evaluation of the content and quality of the 2014 MCAS and PARCC” relative to the “Criteria for High Quality Assessments”[i] produced by one of the organizations that developed Common Core’s standards, with the rest of the report to be delivered in January, it says.[ii]

PARCC continues to insult our intelligence. The language of the “special report” sent to Mitchell Chester, Commissioner of Elementary and Secondary Education, reads like a legitimate study.[iii] The research it purports to have done even incorporated some processes typically employed in studies with genuine intentions of objectivity.

No such intentions could validly be ascribed to the Fordham report.

First, Common Core’s primary private financier, the Bill & Melinda Gates Foundation, pays the Fordham Institute handsomely to promote the standards and its associated testing programs. A cursory search through the Gates Foundation web site reveals \$3,562,116 granted to Fordham since 2009 expressly for Common Core promotion or “general operating support.”[iv] Gates awarded an additional \$653,534 between 2006 and 2009 for forming advocacy networks, which have since been used to push Common Core. All of the remaining Gates-to-Fordham grants listed supported work promoting charter schools in Ohio (\$2,596,812), reputedly the nation’s worst.[v]

The other research entities involved in the latest Fordham study either directly or indirectly derive sustenance at the Gates Foundation dinner table:

– the Human Resources Research Organization (HumRRO), which will deliver another pro-PARCC report sometime soon,[vi]
– the Council of Chief State School Officers (CCSSO), co-holder of the Common Core copyright and author of the “Criteria”,[vii]
– the Stanford Center for Opportunity Policy in Education (SCOPE), headed by Linda Darling-Hammond, the chief organizer of the other federally subsidized Common Core-aligned testing program, the Smarter Balanced Assessment Consortium (SBAC),[viii] and
– Student Achievement Partners, the organization that claims to have inspired the Common Core standards.[ix]

Fordham acknowledges the pervasive conflicts of interest it claims it faced in locating people to evaluate MCAS versus PARCC. “…it is impossible to find individuals with zero conflicts who are also experts”.[x] But, the statement is false; hundreds, perhaps even thousands, of individuals experienced in “alignment or assessment development studies” were available.[xi] That they were not called reveals Fordham’s preferences.

A second reason Fordham’s intentions are suspect rests with its choice of evaluation criteria. The “bible” of test developers is the Standards for Educational and Psychological Testing, jointly produced by the American Educational Research Association, the American Psychological Association, and the National Council on Measurement in Education. Fordham did not use it.

Instead, Fordham chose to reference an alternate set of evaluation criteria concocted by the organization that co-sponsored the development of Common Core’s standards (the Council of Chief State School Officers, or CCSSO), drawing on the work of Linda Darling-Hammond’s SCOPE, the National Center for Research on Evaluation, Standards, and Student Testing (CRESST), and a handful of others. Thus, Fordham compares PARCC to MCAS according to specifications that were designed for PARCC.[xii]

Had Fordham compared MCAS and PARCC using the Standards for Educational and Psychological Testing, MCAS would have passed and PARCC would have flunked. PARCC has not yet accumulated the most basic empirical evidence of reliability, validity, or fairness, and past experience with similar types of assessments suggests it will fail on all three counts.[xiii]

Third, PARCC would also have flunked had Fordham compared MCAS and PARCC using all 24+ of CCSSO’s “Criteria.” But Fordham chose to compare on only 15 of the criteria,[xiv] and those just happened to be the criteria favoring PARCC.

Fordham agreed to compare the two tests with respect to their alignment to Common Core-based criteria. With just one exception, the Fordham study avoided all the criteria in the groups “Meet overall assessment goals and ensure technical quality”, “Yield valuable report on student progress and performance”, “Adhere to best practices in test administration”, and “State specific criteria”.[xv]

Not surprisingly, Fordham’s “memo” favors the Bay State’s adoption of PARCC. However, the authors of How PARCC’s false rigor stunts the academic growth of all students[xvi], released one week before Fordham’s “memo,” recommend strongly against the official adoption of PARCC after an analysis of its test items in reading and writing. They also do not recommend continuing with the current MCAS, which is also based on Common Core’s mediocre standards, chiefly because the quality of the grade 10 MCAS tests in math and ELA has deteriorated in the past seven or so years for reasons that are not yet clear. Rather, they recommend that Massachusetts return to its effective pre-Common Core standards and tests and assign the development and monitoring of the state’s mandated tests to a more responsible agency.

Perhaps the primary conceit of Common Core proponents is that ordinary multiple-choice-predominant standardized tests ignore some, and arguably the better, parts of learning (the deeper, higher, more rigorous, whatever)[xvii]. Ironically, it is they—opponents of traditional testing regimes—who propose that standardized tests measure everything. By contrast, most traditional standardized test advocates do not suggest that standardized tests can or should measure any and all aspects of learning.

Consider this standard from the Linda Darling-Hammond, et al. source document for the CCSSO criteria:

“Research: Conduct sustained research projects to answer a question (including a self-generated question) or solve a problem, narrow or broaden the inquiry when appropriate, and demonstrate understanding of the subject under investigation. Gather relevant information from multiple authoritative print and digital sources, use advanced searches effectively, and assess the strengths and limitations of each source in terms of the specific task, purpose, and audience.”[xviii]

Who would oppose this as a learning objective? But, does it make sense as a standardized test component? How does one objectively and fairly measure “sustained research” in the one- or two-minute span of a standardized test question? In PARCC tests, this is done by offering students snippets of documentary source material and grading them as having analyzed the problem well if they cite two of those already-made-available sources.

But, that is not how research works. It is hardly the type of deliberation that comes to most people’s mind when they think about “sustained research”. Advocates for traditional standardized testing would argue that standardized tests should be used for what standardized tests do well; “sustained research” should be measured more authentically.

The authors of the aforementioned Pioneer Institute report recommend, as their 7th policy recommendation for Massachusetts:

“Establish a junior/senior-year interdisciplinary research paper requirement as part of the state’s graduation requirements—to be assessed at the local level following state guidelines—to prepare all students for authentic college writing.”[xix]

PARCC and the Fordham Institute propose that they can validly, reliably, and fairly measure the outcome of what is normally a weeks- or months-long project in a minute or two.[xx] It is attempting to measure that which cannot be well measured on standardized tests that makes PARCC tests “deeper” than others. In practice, the alleged deeper parts of PARCC are the most convoluted and superficial.

Appendix A of the source document for the CCSSO criteria provides three international examples of “high-quality assessments” in Singapore, Australia, and England.[xxi] None are standardized test components. Rather, all are projects developed over extended periods of time—weeks or months—as part of regular course requirements.

Common Core proponents scoured the globe to locate “international benchmark” examples of the type of convoluted (i.e., “higher”, “deeper”) test questions included in PARCC and SBAC tests. They found none.

Dr. Richard P. Phelps is editor or author of four books: Correcting Fallacies about Educational and Psychological Testing (APA, 2008/2009); Standardized Testing Primer (Peter Lang, 2007); Defending Standardized Testing (Psychology Press, 2005); and Kill the Messenger (Transaction, 2003, 2005), and founder of the Nonpartisan Education Review (http://nonpartisaneducation.org).

[ii] Michael J. Petrilli & Amber M. Northern. (2015, October 30). Memo to Dr. Mitchell Chester, Commissioner of Elementary and Secondary Education, Massachusetts Department of Elementary and Secondary Education. Washington, DC: Thomas B. Fordham Institute. http://edexcellence.net/articles/evaluation-of-the-content-and-quality-of-the-2014-mcas-and-parcc-relative-to-the-ccsso

[iii] Nancy Doorey & Morgan Polikoff. (2015, October). Special report: Evaluation of the Massachusetts Comprehensive Assessment System (MCAS) and the Partnership for Assessment of Readiness for College and Careers (PARCC). Washington, DC: Thomas B. Fordham Institute. http://edexcellence.net/articles/evaluation-of-the-content-and-quality-of-the-2014-mcas-and-parcc-relative-to-the-ccsso

[vi] HumRRO has produced many favorable reports for Common Core-related entities, including alignment studies in Kentucky, New York State, California, and Connecticut.

[vii] CCSSO has received 22 grants from the Bill & Melinda Gates Foundation from “2009 and earlier” to 2015 exceeding \$90 million. http://www.gatesfoundation.org/How-We-Work/Quick-Links/Grants-Database#q/k=CCSSO

[ix] Student Achievement Partners has received four grants from the Bill & Melinda Gates Foundation from 2012 to 2015 exceeding \$13 million. http://www.gatesfoundation.org/How-We-Work/Quick-Links/Grants-Database#q/k=%22Student%20Achievement%20Partners%22

[x] Doorey & Polikoff, p. 4.

[xi] To cite just one example, the world-renowned Center for Educational Measurement at the University of Massachusetts-Amherst has accumulated abundant experience conducting alignment studies.

[xii] For an extended critique of the CCSSO criteria employed in the Fordham report, see “Appendix A. Critique of Criteria for Evaluating Common Core-Aligned Assessments” in Mark McQuillan, Richard P. Phelps, & Sandra Stotsky. (2015, October). How PARCC’s false rigor stunts the academic growth of all students. Boston: Pioneer Institute, pp. 62-68. http://pioneerinstitute.org/news/testing-the-tests-why-mcas-is-better-than-parcc/

[xiii] Despite all the adjectives and adverbs implying newness to PARCC and SBAC as “Next Generation Assessment”, it has all been tried before and failed miserably. Indeed, many of the same persons involved in past fiascos are pushing the current one. The allegedly “higher-order”, more “authentic”, performance-based tests administered in Maryland (MSPAP), California (CLAS), and Kentucky (KIRIS) in the 1990s failed because of unreliable scores; volatile test score trends; secrecy of items and forms; an absence of individual scores in some cases; individuals being judged on group work in some cases; large expenditures of time; inconsistent (and some improper) test preparation procedures from school to school; inconsistent grading on open-ended response test items; long delays between administration and release of scores; little feedback for students; and no substantial evidence after several years that education had improved. As one should expect, instruction had changed as test proponents desired, but without empirical gains or perceived improvement in student achievement. Parents, politicians, and measurement professionals alike overwhelmingly rejected these dysfunctional tests.

See, for example, For California: Michael W. Kirst & Christopher Mazzeo, (1997, December). The Rise, Fall, and Rise of State Assessment in California: 1993-96, Phi Delta Kappan, 78(4) Committee on Education and the Workforce, U.S. House of Representatives, One Hundred Fifth Congress, Second Session, (1998, January 21). National Testing: Hearing, Granada Hills, CA. Serial No. 105-74; Representative Steven Baldwin, (1997, October). Comparing assessments and tests. Education Reporter, 141. See also Klein, David. (2003). “A Brief History Of American K-12 Mathematics Education In the 20th Century”, In James M. Royer, (Ed.), Mathematical Cognition, (pp. 175–226). Charlotte, NC: Information Age Publishing. For Kentucky: ACT. (1993). “A study of core course-taking patterns. ACT-tested graduates of 1991-1993 and an investigation of the relationship between Kentucky’s performance-based assessment results and ACT-tested Kentucky graduates of 1992”. Iowa City, IA: Author; Richard Innes. (2003). Education research from a parent’s point of view. Louisville, KY: Author. http://www.eddatafrominnes.com/index.html ; KERA Update. (1999, January). Misinformed, misled, flawed: The legacy of KIRIS, Kentucky’s first experiment. For Maryland: P. H. Hamp, & C. B. Summers. (2002, Fall). “Education.” In P. H. Hamp & C. B. Summers (Eds.), A guide to the issues 2002–2003. Maryland Public Policy Institute, Rockville, MD. http://www.mdpolicy.org/docLib/20051030Education.pdf ; Montgomery County Public Schools. (2002, Feb. 11). “Joint Teachers/Principals Letter Questions MSPAP”, Public Announcement, Rockville, MD. http://www.montgomeryschoolsmd.org/press/index.aspx?pagetype=showrelease&id=644 ; HumRRO. (1998). Linking teacher practice with statewide assessment of education. Alexandria, VA: Author. http://www.humrro.org/corpsite/page/linking-teacher-practice-statewide-assessment-education

[xiv] Doorey & Polikoff, p. 23.

[xv] MCAS bests PARCC according to several criteria specific to the Commonwealth, such as the requirement under the current Massachusetts Education Reform Act (MERA) that a grade 10 high school exit exam test students in several subject fields (not just ELA and math) and provide specific and timely instructional feedback.

[xvi] McQuillan, M., Phelps, R.P., & Stotsky, S. (2015, October). How PARCC’s false rigor stunts the academic growth of all students. Boston: Pioneer Institute. http://pioneerinstitute.org/news/testing-the-tests-why-mcas-is-better-than-parcc/

[xvii] It is perhaps the most enlightening paradox that, among Common Core proponents’ profuse expulsion of superlative adjectives and adverbs advertising their “innovative”, “next generation” research results, the words “deeper” and “higher” mean the same thing.

[xviii] The document asserts, “The Common Core State Standards identify a number of areas of knowledge and skills that are clearly so critical for college and career readiness that they should be targeted for inclusion in new assessment systems.” Linda Darling-Hammond, Joan Herman, James Pellegrino, Jamal Abedi, J. Lawrence Aber, Eva Baker, Randy Bennett, Edmund Gordon, Edward Haertel, Kenji Hakuta, Andrew Ho, Robert Lee Linn, P. David Pearson, James Popham, Lauren Resnick, Alan H. Schoenfeld, Richard Shavelson, Lorrie A. Shepard, Lee Shulman, and Claude M. Steele. (2013). Criteria for high-quality assessment. Stanford, CA: Stanford Center for Opportunity Policy in Education; Center for Research on Student Standards and Testing, University of California at Los Angeles; and Learning Sciences Research Institute, University of Illinois at Chicago, p. 7. https://edpolicy.stanford.edu/publications/pubs/847

[xix] McQuillan, Phelps, & Stotsky, p. 46.

[xxi] Linda Darling-Hammond, et al., pp. 16-18. https://edpolicy.stanford.edu/publications/pubs/847

Fordham report predictable, conflicted was originally published on Nonpartisan Education Blog


## Wayne Bishop’s observations on the Aspen Ideas Festival session, “Is Math Important?”

Editors’ Note:

David Leonhardt is Washington Bureau Chief for the New York Times, won a Pulitzer Prize for his reporting on economic issues, and majored in applied mathematics as an undergraduate at Yale. Mr. Leonhardt chaired the panel, “Deep Dive: Is Math Important?” an “event” in the program track “The Beauty of Mathematics”. Other program track events included individual lectures from each of the panelists.

Mathematicians might consider the panel composition rather odd and ideologically one-sided. Three of the panelists are not mathematicians but are wholehearted believers in constructivist approaches to math education, often derided as “fuzzy math”. Two of them claim, ludicrously, that high-achieving East Asian countries teach math their way. Those three panelists are journalist Elizabeth Green, education professor Jo Boaler, and the College Board’s David Coleman, who holds a degree in English literature and classical philosophy. When only one side is allowed to talk, of course, it can make any claims it likes.

Watch for yourself: Aspen Ideas Festival: Deep Dive: Is Math Important?

http://video.pbs.org/video/2365521689/

Professor Bishop’s essay, written in the form of a letter to David Leonhardt, can be found here.
http://nonpartisaneducation.org/Review/Essays/v11n1.pdf

## Wayne Bishop’s Response to Ratner and Wu (Wall Street Journal)

“Making Math Education Even Worse,” by Marina Ratner:

http://online.wsj.com/articles/marina-ratner-making-math-education-even-worse-1407283282

————————————————
Dear Hung-Hsi,

It pains me to write this, but in spite of all of your precollegiate mathematics education knowledge and contributions, Prof. Ratner got it right and you “missed the boat” in your response:
http://online.wsj.com/articles/if-only-teaching-mathematics-was-as-clear-as-1-1-2-letters-to-the-editor-1408045221
The CA Math Content Standards were – and still are – the best in the country. They have problems; e.g., there is too much specialized focus in the thread on Statistics, Data Analysis, and Probability and, even worse, in Mathematical Reasoning. No sensible person can be against mathematical reasoning, of course, but that is exactly the point. Sensible people embed it everywhere and, as a standalone item, it becomes almost meaningless – hence the paucity (as in none) of CA Key Standards in that category. The writers included it to help ensure Board of Ed approval because most professional math educators were strongly objecting to the entire Stanford approach. Perhaps most egregious is your characterization of California’s problems using poison words: “rote-learning of linear equations by not preparing students for the correct definition of slope.” This is at best misleading and closer to being flat wrong:
—————————————–
From the introduction to Grade 7:
“They graph linear functions and understand the idea of slope and its relation to ratio.”
This is followed specifically with two Key Standards and examples:
3.3 Graph linear functions, noting that the vertical change (change in y-value) per unit of horizontal change (change in x-value) is always the same and know that the ratio (“rise over run”) is called the slope of a graph.
3.4 Plot the values of quantities whose ratios are always the same (e.g., cost to the number of an item, feet to inches, circumference to diameter of a circle). Fit a line to the plot and understand that the slope of the line equals the ratio of the quantities.
—————————————–
In what way(s) do you find the relevant 8th grade standard in the CCSS-M, Expressions and Equations (EE.8 #5,6), to be conceptually superior? (The word is used once in the intro to Grade 7 but it is not mentioned thereafter.) Formally proving that all pairs of distinct points determine similar triangles so that this ratio is well-defined would be mathematically necessary to be completely logical but I doubt if that’s what you meant particularly since traditional proof has been downplayed so badly even in the high school CCSS-M, much less 8th grade, especially in comparison with the CA Math Content Standards.

Regarding the general concept of competent Algebra 1 (not some pretense thereof), it was, it is, and it will remain standard in 8th grade (if not already accomplished in 7th grade) for self-respecting, academically oriented private schools. As you well know, the Stanford Math group who wrote the CA Standards started with the egalitarian notion that this should be an opportunity for everyone, including those who do not have access to such schools. Traditional Algebra 1 could not be, and was not intended to be, simply imposed as the math course for all 8th graders; rather, the group worked backwards from that target step-by-step through the grades in order to get there comfortably (such as developing the concept of slope in 7th grade, which you appear to have missed). Is every detail spelled out? Of course not, nor should it be, but the key ideas – even set off as Key Standards – are there and presented considerably more clearly than in the CCSS-M.

There is statistical evidence that the goal did improve the state of mathematics competence in California, but we both know the CA Math Content Standards fell well short of the ideal. It was not – as your words could be interpreted to imply – that they reflect an inherent lack of development of student understanding. The primary villain is the overwhelming mandate for chronological grade placement (entry at age 5) for incoming students and almost universal social promotion. Far too many students are not competent with the standards at their grade levels – sometimes years below – yet they move on anyway. Algebra in 8th grade – or in 11th grade, or even in college – is not realistic for students who lack the easily identifiable mathematics antecedents, save the truly gifted. A less common problem, but one damaging to our most talented students, is the reverse situation: advancement in grade level (as was done for my son at his private school; he is now chair of Chemistry and Biochemistry at Amherst College) is almost unheard of. Although mandated by many districts, and underscored by the API scoring of schools, placing all students in an honest Algebra class in 8th grade without a reasonable level of competence with the standards of earlier grades was never the intention. It was to be the opportunity, not the mandate.

“Moreover, Common Core does not place a ceiling on achievement. What the standards do provide are key stepping stones to higher-level math such as trigonometry, calculus and beyond.”

Although these words are regularly repeated, reality is the diametric opposite. Across California, CPM (supposedly, College Preparatory Mathematics) is back with a vengeance. Ironically, CPM was the very catalyst that spawned the now-defunct Mathematically Correct, and it pulled its submission to California from the 2001 approval process rather than be rejected by our CRP (Content Review Panel). You’ll recall that CPM and San Francisco State’s IMP were among the federally blessed “Exemplary” programs for which the only mathematician, UT-SA’s Manuel P. Berriozábal, refused to sign off. Weren’t you among the signatories of David Klein’s full-page letter of objection in the Washington Post? One of CPM’s long-standing goals is to have ALL assessments – even final examinations – done collectively with one’s assigned group. It makes for a wonderful ruse – all students can appear to be meeting the “standards” of the course (even if absent!) – while deeply frustrating those students who are “getting it” (often with direct instruction by some family member who knows the subject). Trigonometry, calculus, and beyond from any of CPM, IMP, or Core-Plus (all self-blessed as CCSS-M compatible)? It just doesn’t happen. However, from the homepage of Core-Plus:

“The new Common Core State Standards (CCSS) edition of Core-Plus Mathematics builds on the strengths of previous editions that were cited as Exemplary by the U.S. Department of Education Expert Panel on Mathematics and Science.”

What did happen – and may already be happening again – was that, beneath the horizon, schools began to offer a traditional alternative to provide an opportunity for adequate preparation for knowledgeable students with math-based career aspirations. What also happened (though it may not succeed this time because of the SBAC or PARCC state examinations) was that other students and their parents petitioned their Boards of Education for an elective choice; where unfettered choice was granted, the death knell sounded on the innovative “deeper understanding” curriculum and pedagogy.

Finally, you do acknowledge the ridiculous nature of the 6th grade “picture-drawing frenzy” observed by Prof. Ratner but seem to imply it was an isolated incident instead of her description, “this model-drawing mania went on in my grandson’s class for the entire year.” The fact is that such misinterpretations of “teaching for deeper understanding” are going on for entire years in classrooms – in entire districts – all across the country; they are even taught by professional math educators as mandated by Common Core. You described her observation as a “failure to properly implement Common Core”, and I am sure that you believe that to be the case, but your conviction is belied by the fact that one of the three primary writers of the CCSS-M and the head of the SBAC-M is Phil Daro (bachelor’s degree in English literature). Phil Daro has been strongly influential in precollegiate mathematics education – curricula and pedagogy – across California for decades; my first working acquaintance with him was in 1988, months prior to the first NCTM Standards. His vision for the “right” way to conduct mathematics classrooms (not “to teach”) helped lead to the 1992 CA Math Framework, MathLand-type curricula, and the ensuing California battles of the Math Wars, with our temporary respite beginning in late 1997. Unfortunately, his vision is not only reinvigorated here in California but is now a huge national problem, and Prof. Ratner “nailed it”.

Wayne Bishop

Wayne Bishop’s Response to Ratner and Wu (Wall Street Journal) was originally published on Nonpartisan Education Blog
