Surprise! SBAC and CRESST stonewall public records request for their financial records

Say what you will about Achieve, PARCC, Fordham, CCSSO, and NGA, some of the organizations responsible for foisting the Common Core Initiative on us all. But, their financial records are publicly available.

Not so for some other organizations responsible for the same Common Core promotion. The Smarter Balanced Assessment Consortium (SBAC) and the National Center for Research on Evaluation, Standards, and Student Testing (CRESST) have absorbed many millions of taxpayer and foundation dollars over the years. But, their financial records have been hidden inside the vast, nebulous cocoon of the University of California, Los Angeles (UCLA). UCLA’s financial records, of course, are publicly available, but the amounts there are aggregated at a level that subsumes thousands of separate, individual entities.

UCLA is a tax-supported state institution, however, and California has an open records law on the books. After some digging, I located the UCLA office responsible for records requests and wrote to them. Following is a summary of our correspondence to date:

 

July 5, 2017

Greetings:

I hope that you can help me. I have spent a considerable amount of time clicking around in search of financial reports for the Smarter Balanced Assessment Consortium (SBAC) and the National Center for Research on Evaluation, Standards, and Student Testing (CRESST), both “housed” at UCLA (or, until just recently in SBAC’s case). Even after many hours of web searching, I still have no clue as to where these data might be found.

Both organizations are largely publicly funded through federal grants. I would like to obtain revenue and expenditure detail on the order of what a citizen would expect to see in a nonprofit organization’s Form 990. I would be happy to search through a larger database that contains relevant financial details for all of UCLA, so long as the details for SBAC and CRESST are contained within and separately labeled.

I would like annual records spanning the lifetimes of each organization: SBAC only goes back several years, but CRESST goes back to the 1980s (in its early years, it was called the Center for the Study of Evaluation).

Please tell me what I need to do next.

Thank you for your time and attention.

Best Wishes, Richard Phelps

 

July 6, 2017

RE: Acknowledgement of Public Records Request – PRR # 17-4854

Dear Mr. Phelps:

This letter is to acknowledge your request under the California Public Records Act (CPRA) dated July 5, 2017, herein enclosed. Information Practices (IP) is notifying the appropriate UCLA offices of your request and will identify, review, and release all responsive documents in accordance with relevant law and University policy.

Under the CPRA, Cal. Gov’t Code Section 6253(b), UCLA may charge for reproduction costs and/or programming services. If the cost is anticipated to be greater than $50.00 or the amount you authorized in your original request, we will contact you to confirm your continued interest in receiving the records and your agreement to pay the charges. Payment is due prior to the release of the records.

As required under Cal. Gov’t Code Section 6253, UCLA will respond to your request no later than the close of business on July 14, 2017. Please note, though, that Section 6253 only requires a public agency to make a determination within 10 days as to whether or not a request is seeking records that are publicly disclosable and, if so, to provide the estimated date that the records will be made available. There is no requirement for a public agency to actually supply the records within 10 days of receiving a request, unless the requested records are readily available. Still, UCLA prides itself on always providing all publicly disclosable records in as timely a manner as possible.

Should you have any questions, please contact me at (310) 794-8741 or via email at pahill@finance.ucla.edu and reference the PRR number found above in the subject line.

Sincerely,

Paula Hill

Assistant Manager, Information Practices

 

July 14, 2017

RE: Public Records Request – PRR # 17-4854

Dear Mr. Phelps:

The purpose of this letter is to confirm that UCLA Information Practices (IP) continues to work on your public records request dated July 5, 2017. As allowed pursuant to Cal. Gov’t Code Section 6253(c), we require additional time to respond to your request, due to the following circumstance(s):

The need to search for and collect the requested records from field facilities or other establishments that are separate from the office processing the request.

IP will respond to your request no later than the close of business on July 28, 2017 with an estimated date that responsive documents will be made available.

Should you have any questions, please contact me at (310) 794-8741 or via email at pahill@finance.ucla.edu and reference the PRR number found above in the subject line.

Sincerely,

Paula Hill

Assistant Manager, Information Practices

 

July 28, 2017

Dear Mr. Phelps,

Please know UCLA Information Practices continues to work on your public records request, attached for your reference. I will provide a further response regarding your request no later than August 18, 2017.

Should you have any questions, please contact me at (310) 794-8741 or via email and reference the PRR number found above in the subject line.

Kind regards,

Paula Hill

Assistant Manager

UCLA Information Practices

 

July 29, 2017

Thank you. RP

 

August 18, 2017

Re: Public Records Request – PRR # 17-4854

Dear Mr. Richard Phelps:

UCLA Information Practices (IP) continues to work on your public records request dated July 5, 2017. As required under Cal. Gov’t Code Section 6253, and as noted in our email communication with you on July 28, 2017, we are now able to provide you with the estimated date that responsive documents will be made available to you, which is September 29, 2017.

As the records are still being compiled and/or reviewed, we are not able at this time to provide you with any potential costs, so that information will be furnished in a subsequent communication as soon as it is known.

Should you have any questions, please contact me at (310) 794-8741 or via email at pahill@finance.ucla.edu and reference the PRR number found above in the subject line.

Sincerely,

Paula Hill

Assistant Manager, Information Practices

 

September 29, 2017

Dear Mr. Richard Phelps,

Unfortunately, we must revise the estimated availability date regarding your attached request as the requisite review has not yet been completed. We expect to provide a complete response by November 30, 2017. We apologize for the delay.

Should you have any questions, please contact our office at (310) 794-8741 or via email, and reference the PRR number found above in the subject line.

Best regards,

UCLA Information Practices

 

September 29, 2017

I believe that if you are leaving it up to CRESST and SBAC to voluntarily provide the information, they will not be ready Nov. 30 either. RP

‘One size fits all’ national tests not deeper or more rigorous

http://www.educationnews.org/education-policy-and-politics/one-size-fits-all-national-tests-not-deeper-or-more-rigorous/

Some say that now is a wonderful time to be a psychometrician — a testing and measurement professional. There are jobs aplenty, with high pay and great benefits. Work is available in the private sector at test development firms; in recruiting, hiring, and placement for corporations; in public education agencies at all levels of government; in research and teaching at universities; in consulting; and many other spots.

Moreover, there exist abundant opportunities to work with new, innovative, “cutting-edge” methods, techniques, and technologies. The old, fuddy-duddy, paper-and-pencil tests with their familiar multiple-choice, short-answer, and essay questions are being replaced by newfangled computer-based, internet-connected tests with graphical interfaces and interactive test item formats.

In educational testing, the Common Core Standards Initiative (CCSI) and its associated tests, developed by the Smarter Balanced Assessment Consortium (SBAC) and the Partnership for Assessment of Readiness for College and Careers (PARCC), have encouraged the movement toward “21st century assessments”. Much of the torrential rain of funding that has burst forth from federal and state governments and from clouds of wealthy foundations has pooled in the pockets of psychometricians.

At the same time, however, the country’s most authoritative psychometricians, the very people who would otherwise have been available to guide, or caution against, the transition to the newer standards and tests, have been co-opted. In some fashion or another, they now work for the CCSI. Some work for the SBAC or PARCC consortia directly, some work for one or more of the many test development firms hired by the consortia, and some help the CCSI in other capacities. Likely, they have all signed confidentiality agreements (i.e., “gag orders”).

Psychometricians who once had been very active in online chat rooms or other open discussion forums on assessment policy no longer are, except to proffer canned promotions for the CCSI entities they now work for. They are being paid well. They may be doing work they find new, interesting, and exciting. But, with their loss of independence, society has lost perspective.

Perhaps the easiest place to see this loss of perspective is in the declining adherence to test development quality standards, the very standards that prescribe the behavior of testing and measurement professionals themselves. Over the past decade, for example, the International Test Commission (ITC) alone has developed several sets of standards.

Perhaps the oldest set of test quality standards was established originally by the American Psychological Association (APA) and was updated most recently in 2014: the Standards for Educational and Psychological Testing (AERA, APA, & NCME, 2014). It contains hundreds of individual standards. The CCSI as a whole, and the SBAC and PARCC tests in particular, fail to meet many of them.

The problem starts with what many professionals consider the testing field’s “prime directive”, Standard 1.0 (AERA, APA, & NCME, 2014, p. 23). It reads as follows:

“Clear articulation of each intended test score interpretation for a specified use should be set forth, and appropriate validity evidence in support of each intended interpretation should be provided.”

That is, a test should be validated for each purpose for which it is intended to be used before it is used for that purpose. Before it is used to make important decisions. And, before it is advertised as serving that purpose.

Just as states were required by the Race to the Top competition for federal funds to accept Common Core standards before they had even been written, CCSI proponents have boasted about their new consortium tests’ wonderful benefits since before test development even began. They claimed unproven qualities for then-nonexistent tests, either because most CCSI proponents do not understand testing or because they are paid not to understand.

In two fundamental respects, the PARCC and SBAC tests will never match their boosters’ claims nor meet basic, accepted test development standards. First, single tests are promised to measure readiness for too many, and too disparate, outcomes: college and careers, that is, all possible futures. It is implied that PARCC and SBAC will predict success in art, science, plumbing, nursing, carpentry, politics, law enforcement ... any future one might wish for.

This is not how it is done in educational systems that manage multiple career pathways well. In Germany, Switzerland, Japan, Korea, and, unfortunately, only a few jurisdictions in the U.S., a range of different tests is administered, each appropriately designed for its target profession. Aspiring plumbers take plumbing tests. Aspiring medical workers take medical tests. And those who wish to prepare for more advanced degrees might take more general tests that predict their aptitude to succeed in higher education institutions.

But that isn’t all. SBAC and PARCC are said to be aligned to the K-12 Common Core standards, too. That is, they are claimed both to summarize mastery of past learning and to predict future success. One test purports to measure how well students have done in high school and how well they will do in either the workplace or college: three distinctly different environments and two distinctly different time periods.

PARCC and SBAC are being sold as replacements for state high school exit exams, for 4-year college admission tests (e.g., the SAT and ACT), for community college placement tests (e.g., COMPASS and ACCUPLACER), and for vocational aptitude tests (e.g., the ASVAB). The problem is, these are very different types of tests. High school exit exams are generally not designed to measure readiness for future activity but, rather, to measure how well students have learned what they were taught in elementary and secondary schools. We have high school exit exams because citizens believe it important for their children to learn what is taught there. Learning civics well in high school, for example, may not correlate highly with how well a student does in college or career, but many nonetheless consider it important for our republic that its citizens learn the subject.

High school exit exams are validated by their alignment with the high school curriculum, or content standards. By contrast, admission and aptitude tests are validated by their correlation with desired future outcomes (grades, persistence, productivity, and the like), that is, by their predictive validity. In their pure, optimal forms, a high school exit exam, a college admission test, and a vocational aptitude test bear only a slight resemblance to one another. They are different tests because they have different purposes and, consequently, require different validations.
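To make the distinction concrete, here is a minimal sketch, with entirely hypothetical numbers (no actual study or dataset is represented), of how a predictive-validity coefficient is typically computed: as the correlation between test scores collected at one point in time and outcomes observed later.

```python
# A minimal sketch of a predictive-validity calculation.
# The scores and GPAs below are entirely hypothetical.
import numpy as np

# Admission-test scores, collected in grade 12.
scores = np.array([1180, 1320, 990, 1450, 1100, 1260, 1040, 1390])

# The same students' first-year college GPAs, observed a year later.
gpa = np.array([2.9, 3.4, 2.3, 3.8, 2.7, 3.1, 2.5, 3.6])

# The basic predictive-validity coefficient: the Pearson correlation
# between the earlier test scores and the later outcomes.
r = np.corrcoef(scores, gpa)[0, 1]
print(f"predictive validity (Pearson r): {r:.2f}")
```

An exit exam, by contrast, would be judged not by any such correlation but by how faithfully its items sample the curriculum students were actually taught.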

————

Let’s assume for the moment that the Common Core consortia tests, PARCC and SBAC, can validly measure all that is claimed for them: mastery of the high school curriculum and success in further education and in the workplace. The fact is, no evidence has yet been produced that verifies any of these claims. And, remember, the proof of a new test’s virtues is supposed to be provided before the test is put to any consequential use.

Sure, Common Core proponents claim to have just recently validated their consortia tests for correlation with college outcomes, for alignment with elementary and secondary school content standards, and for technical quality. The clumsy studies they cite do not match the claims made for them, however.

SBAC and PARCC cannot be validated for their purpose of predicting college and career readiness until data are collected, in the years to come, on the college and career outcomes of those who took the tests in high school. The study cited by Common Core proponents (Nichols-Barrer, Place, Dillon, & Gill, 2015) uses the words “predictive validity” in its title. Only in the fine print does one discover that, at best, the study measured “concurrent” validity: high school tests were administered to current rising college sophomores and the results compared to their freshman-year college grades. Calling that “predictive validity” is, frankly, dishonest.
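The objection reduces to a simple timing check: a study supports a predictive-validity claim only if the test was administered before the outcome it claims to predict. Here is a minimal sketch of that distinction, with hypothetical years that correspond to no particular study:

```python
# A sketch of the predictive-versus-concurrent design distinction.
# The years below are hypothetical illustrations only.
from dataclasses import dataclass

@dataclass
class StudyDesign:
    test_year: int     # when the high school test was administered
    outcome_year: int  # when the college outcome (e.g., freshman GPA) was earned

def supports_predictive_claim(design: StudyDesign) -> bool:
    """A predictive-validity claim requires the test to precede the outcome."""
    return design.test_year < design.outcome_year

# Predictive design: test high school students now, then wait for
# their college outcomes in later years.
predictive = StudyDesign(test_year=2015, outcome_year=2017)

# Concurrent design: administer the test to rising college sophomores
# whose freshman grades are already on the books.
concurrent = StudyDesign(test_year=2015, outcome_year=2015)

print(supports_predictive_claim(predictive))   # True
print(supports_predictive_claim(concurrent))   # False
```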

It might seem less of a stretch to validate SBAC and PARCC as high school exit exam replacements. After all, they supposedly are aligned to the Common Core Standards, so in any jurisdiction where the Common Core Standards prevail, they would be retrospectively aligned to the high school curriculum. Two issues tarnish this rosy picture. First, the Common Core Standards are narrow in subject coverage, just mathematics and English language arts, with no attention paid to the majority of the high school curriculum.

Second, common adherence to the Common Core Standards across the states has deteriorated to the point of dissolution. As the Common Core consortia’s grip on compliance (i.e., alignment) continues to loosen, states, districts within states, and schools within districts are teaching how they want and what they want. The less instruction aligns with the Common Core Standards, the less valid the consortium tests become as measures of past learning.

As for technical quality, the Fordham Institute, which is paid handsomely by the Bill & Melinda Gates Foundation to promote Common Core and its consortia tests, published a report purporting to be an “independent” comparative standards alignment study (Phelps, 2016). Among its several fatal flaws: instead of evaluating tests according to the industry-standard Standards for Educational and Psychological Testing, or any of dozens of other freely available and well-vetted test evaluation standards, guidelines, or protocols used around the world by testing experts, the authors employed “a brand new methodology” specifically developed for Common Core and its copyright owners, and paid for by Common Core’s funders.

Though the Common Core consortia’s test sales pitches may be the most disingenuous, SAT and ACT spokespersons haven’t been completely forthright either. To those concerned about the inevitable degradation of predictive validity if their tests are truly aligned to the K-12 Common Core standards, the public relations staffs assure us that predictive validity is a foremost consideration. To those concerned about the inevitable loss of alignment to the Common Core standards if predictive validity is optimized, they assure complete alignment.

So, all four of the test organizations have been muddling the issue. It is difficult to know what we will get with any of the four tests. They are all straddling or avoiding questions about the trade-offs. Indeed, we may end up with four roughly equivalent, muddled tests, none of which serves any of its intended purposes well.

This is not progress. We should want separate tests, each optimized for a different purpose, be it measuring high school subject mastery or predicting success in 4-year college, in 2-year college, or in a skilled trade. Instead, we may be getting several one-size-fits-all, watered-down tests that claim to do it all but, as a consequence, do nothing well. Instead of a skilled tradesperson’s complete tool set, we may be getting four Swiss army knives with roughly the same features. Instead of exploiting psychometricians’ advanced knowledge and skills to optimize three or more very different types of measurement, we seem to be reducing all of our nationally normed end-of-high-school tests to a common, generic muddle.

————

References

American Educational Research Association (AERA), American Psychological Association (APA), & National Council on Measurement in Education (NCME). (2014). Standards for Educational and Psychological Testing. Washington, DC: AERA.

McQuillan, M., Phelps, R. P., & Stotsky, S. (2015, October). How PARCC’s false rigor stunts the academic growth of all students. Boston: Pioneer Institute. http://pioneerinstitute.org/news/testing-the-tests-why-mcas-is-better-than-parcc/

Nichols-Barrer, I., Place, K., Dillon, E., & Gill, B. (2015, October 5). Final report: Predictive validity of MCAS and PARCC: Comparing 10th grade MCAS tests to PARCC Integrated Math II, Algebra II, and 10th grade English Language Arts tests. Cambridge, MA: Mathematica Policy Research. http://econpapers.repec.org/paper/mprmprres/a2d9543914654aa5b012e4a6d2dae060.htm

Phelps, R. P. (2016, February). Fordham Institute’s pretend research. Policy brief. Boston: Pioneer Institute. http://pioneerinstitute.org/featured/fordhams-parcc-mcas-report-falls-short/

Some Common Core Salespersons’ Salaries: DC Edu-Blob-ulants

Linked are copies of Form 990s for Marc Tucker’s National Center for Education and the Economy (NCEE), Checker Finn’s Fordham Foundation and Fordham Institute, and Bob Wise’s Alliance for Excellent Education (AEE). Each pays himself and at least one other well.

All non-profit organizations with annual gross receipts exceeding $50,000 must file a Form 990 with the Internal Revenue Service. And, in return for the non-profits’ tax-exempt status, their Form 990s are publicly available.

As to salaries…

National Center for Education and the Economy
NCEE2013Form990 – Marc Tucker pays himself $501,087, and six others receive from $162k to $379k (p. 40 of 48); his son, Joshua Tucker, receives $214,813 (p. 42)
…also interesting: p.16 (contrast with p.15), pp. 19, 27, 37

Alliance for Excellent Education
AEE2013Form990 – Bob Wise pays himself $384,325, and six others receive from $162k to $227k (see p. 27 of 36)
…also interesting: p.24 (“Madoff Recovery”)

Thomas B. Fordham Foundation & Institute
FordhamF2013Form990 & FordhamI2013Form990 – With both a “Foundation” and an “Institute”, Checker Finn and Mike Petrilli can each pay themselves about $100k, twice (see p. 25 of 42)
…also interesting: p. 19 ($29 million in investments; $1.5 million for an interest rate swap); p. 37 (particularly the two entries for “Common Sense Offshore, Ltd.”)

How the USED has managed to get it wrong, again

https://www.washingtonpost.com/news/answer-sheet/wp/2016/02/03/dad-my-state-now-requires-11th-graders-to-take-the-sat-not-my-daughter/

An interesting dilemma. Common Core’s writers planned for a grade 11 test that would tell us whether or not students were college and career ready. Parents and state legislators don’t know who sets the cut score, what test items are on it, or what exactly a passing score on a college readiness test means academically. Yet, all those who pass and enroll in a post-secondary educational institution are entitled to credit-bearing coursework in their freshman year.

So, why should most students wanting to go to a public college take a college admissions test, such as the ACT or SAT? No need to waste time and money on another, unnecessary test that is also “aligned to” Common Core, we are told.

But, that means the SAT and ACT companies lose a lot of money. So, what does the USED do to try to make sure they don’t lose money? It tells states that instead of a Common Core-based test in grade 11, they can require the SAT or ACT for all students for “federal accountability.” Almost a dozen states have fallen for this idiotic idea.

It turns out that an increasing number of colleges are no longer requiring SAT or ACT scores. http://news.yahoo.com/heres-happened-school-made-sats-202000551.html Why? Among other reasons, the tests can no longer tell them much about likely success at post-secondary institutions where all students who pass a grade 11 Common Core-based test are entitled to credit-bearing courses in their freshman year and cannot be given a placement test to determine remediation level. Some public college presidents and administrators agreed to that arrangement on their state’s application for Race to the Top (RTTT) funds. Since then, more have. God help the freshman course instructor who doesn’t pass students who were declared college-ready to begin with.

Nor can the tests tell the colleges whether or not the students know much about whatever they studied in K-12. Why? The tests were developed to serve as college admissions tests that predict success in college, not as high school achievement tests. According to some math teachers, they contain material (some Algebra II and trigonometry items) that students haven’t been taught in a Common Core-based curriculum, and they don’t assess everything important that has been taught.

Worse yet, USED seems to want states to eliminate all other tests, that is, the non-Common Core-based tests, possibly including teacher-made tests (on the grounds of getting rid of excessive testing), and to make passing a grade 11 college and career readiness test all that is required for a high school diploma (the requirements might include course titles whose content is presumably addressed by Common Core standards, such as English, Algebra I, and Geometry). Almost everyone will have to be passed, or there will be an uproar from the parents of low-achieving students. (Student writing, incidentally, is no longer required by the SAT.)

States adopted Common Core because they believed it would be the silver bullet that made all students college and career ready. If they also believe that all students declared college and career ready are thereby qualified to take credit-bearing coursework in post-secondary education, how can they not give a high school diploma to anyone who passes the grade 11 test, even if they don’t know what’s on it, who set the cut score and determined who should pass, or what passing the test really means academically? The makers of the SAT and ACT are private companies and are not obligated to release any information they don’t want to release.

Who cares if all or most kids don’t want to go to college? Who cares what’s on the tests given in grade 11? All that matters is that the state has met what is required for federal accountability and will get ESSA funds and other money to give its K-12 schools, while it taxes those who can still afford to pay taxes to cover the increasing costs of less and less teaching and learning. Graduate schools may not care, since they will be able to find enough tuition-paying, qualified students from other countries.