The State Education Department
Office of Higher & Professional Education
OFFICE OF RESEARCH & INFORMATION SYSTEMS
POLICY & ISSUE PAPERS
PERFORMANCE REPORTING IN HIGHER EDUCATION
IN THE NATION AND NEW YORK STATE
BRIEFING PAPER
for the Regents Committee on Higher and Professional Education
September 1996
Table of Contents
Introduction
Part One: National Trends
Part Two: The New York Context
List of Tables
References
Attachments
Acknowledgements
This paper was made possible by many individuals and organizations from New York and
across the country who generously shared information and insight. We gratefully
acknowledge their contributions. All errors are the responsibility of the New York State
Education Department.
Introduction
Since the 1980s there has been nationwide interest in improving undergraduate education
and making higher education more accountable to the public. The "assessment"
movement of the 1980s permitted colleges and universities to assess themselves according
to their own criteria, standards and procedures. It was followed in the 1990s by the
"accountability" movement, which focuses on external audiences and requires the
use of common, comparable indicators of performance that reflect student concerns and
public policy goals. By 1996, more than half the states were developing or had issued a
public report on the performance of their higher education systems.
New York has traditionally achieved state policy goals for its diverse system of 251
public and private higher education institutions through Regents oversight and
coordination combined with tax support for institutions, students and special purpose
programs. In 1993 an independent Regents Commission on Higher Education recommended the
addition of public performance reports. It recommended that strategic goals be set for the
statewide system as a whole and for each institution. It also recommended that the Regents
and institutional trustees issue regular reports on progress being made toward their
goals. In 1994, in the spirit of the Commission's recommendations, the State University of
New York issued a systemwide performance report. By the spring of 1996, the State Senate
had approved a bill, S. 7231, that would have required the Commissioner of Education to
publish an annual "report card" on higher education. Although no legislation was
passed, that same spring the Regents made a higher education performance report a key
initiative in their strategic plan for the Education Department and in their draft
1996-2004 plan for the higher education system.
This paper provides background information about higher education performance reporting in
the U.S. and New York. The first part of the paper describes the status of the rapidly
emerging accountability movement, summarizes lessons learned in other states, and provides
examples of performance indicators in use. The second part of the paper describes recent
interest in performance reporting in New York and the current climate for the Regents
performance reporting initiative.
Part One
National Trends
Since the 1980s, there has been growing interest in making higher education more
accountable to the public for the resources it uses and the results it achieves.
Government leaders have been seeking information about student and institutional
performance to help them make tough choices about the uses of public funds. Similarly,
students and their families -- facing rising tuition, increased educational debt and
uncertain labor markets -- have been seeking information to help them make difficult
decisions about where to go to college and what to study.
In this environment, the traditional "input" indicators of college
"quality," such as faculty credentials or the number of books in the library,
have been replaced by information about "outputs" and "outcomes."
Those outside the academy want to know the answers to such questions as: What
instructional and support services do students receive? What skills and knowledge do
students gain? What percent of students graduate? How long does it take them to graduate?
Are graduates satisfied with their educations? What percent of graduates go on to further
study or employment? What do graduates earn? Are employers satisfied with the skills and
knowledge college graduates bring to their jobs?
Outcomes assessment
Responding to reports of declining national economic competitiveness and educational
standards, in 1986 the National Governors' Association called for:
"every college and university ... to implement systematic reforms that use
multiple measures to assess undergraduate student learning. The information gained from
assessment should be used to evaluate institutional and program quality. Information about
institutional and program quality should also be made available to the public."
Higher education's response was the "assessment" or "outcomes
assessment" movement. It focused on the measurement of educational achievement in
college students and was characterized by institutional choice in matters of measurement,
public disclosure, and use of results. Appropriately, institutions used a variety of
assessment methods, such as nationally-normed tests, state-developed tests,
locally-developed tests, student grades, student portfolios, and student projects or
"capstone" experiences. Because the assessment movement was
"institution-centered," it did not generate comparable information for different
institutions or information that could be aggregated to state and national summaries.
By 1994, nearly all institutions claimed to engage in formal assessment activities
(El-Khawas, 1995). By the end of 1995, approximately two-thirds of the public higher
education systems in the U.S. required formal assessment of undergraduate student
learning, as shown in Table 1. About half the states had "institution-centered"
programs. Seven public systems, in Arizona, Georgia, Florida, South Dakota, Tennessee,
Texas, and Wisconsin, required their institutions to use common measures of student
learning, and another six states were considering common measures or nationally-normed
tests (Ewell, 1996).
Table 1. Status of Undergraduate Assessment Initiatives in Public Higher Education in the States in 1995

Assessment initiatives for undergraduate learning | # of states | % of states
Institution-centered: Local development and use | 28 | 56%
Common measures | 7 | 14%
Program review | 2 | 4%
None | 10 | 20%
Unknown | 3 | 6%

SOURCE: NCES survey reported in Ewell, 1996.
In a recent survey, sixty-one percent of state higher education boards
indicated that common measures of student learning should be used, and an additional
twenty-one percent gave their qualified approval to such measures (Steele and Lutz, 1995).
To some, however, assessment has become a "back burner" issue that has been
subsumed by more comprehensive state reforms. The following obstacles to assessment's
further development were recently identified (Ewell, 1996):
1. A shift of public concern from improving higher education to increasing its
efficiency;
2. Policy doubts about the real utility of results of learning assessments in guiding
policy or discharging accountability;
3. Inadequate funding;
4. Institutional resistance, especially among faculties and at research universities;
5. Lack of authority on the part of state agencies to undertake assessment;
6. Excessive diversity in postsecondary educational settings, with respect to mission
differences and students served;
7. Lack of appropriate instruments and high costs of developing local assessment
instruments;
8. Lack of federal and state leadership in developing common indicators and
definitions;
9. Lack of faculty and community consensus about learning domains to be assessed; and
10. A range of program implementation problems, including lack of student motivation
to perform on nonrequired tests, lack of faculty and state agency staff training, and
questions about data reliability.
Public accountability
In the 1990s, as federal and state leaders faced tougher choices about
where to spend public funds, higher education came under the same scrutiny as other public
programs and was increasingly asked to demonstrate performance with common measures that
would permit state and national aggregations and comparisons. At the national level,
performance measurement was required in several pieces of legislation, shown in Table 2,
and its implications for state and federal policy were widely debated (Hauptman and
Mingle, 1995). To advance common measures, the National Center for Education Statistics
undertook a variety of initiatives, including a nationwide graduation rate survey and a
national test of college level skills, which was aborted for lack of funds.
Table 2. Federal Laws with Performance Indicators for Higher Education

Law | Purposes of Indicators
The Student Right-to-Know and Campus Security Act (1990) | Public disclosure of crime and graduation rates
Perkins Vocational and Applied Technology Education Act (1990) | Improvement of completion and job placement rates
Program Integrity Triad provisions in the Higher Education Act (1992) | Focus accreditation on student outcomes
Goals 2000: Educate America Act (1994) | Systemic improvement
In 1995, Governor Roy Romer of Colorado, then Chairman of the Education Commission of
the States (ECS), urged the states to measure institutional performance as they sought to
improve higher education.
"Because higher education is so important to our future well-being, our
investments in colleges and universities must pay high returns for both individuals and
society as a whole. It seems clear to me that as state leaders and as parents, we have the
responsibility to take steps to insure our institutions of higher learning meet our needs
and expectations.
Increasingly, ...both the public and state leaders are expressing concerns about the
quality and effectiveness of their higher education institutions. The concerns reflect a
wide range of issues, including: increasing costs, declining access, large class sizes,
lack of course offerings, and reduced productivity and faculty workload.
Agreeing on what we mean by quality and measuring this quality at various institutions
is essential to how we address issues of cost, access and the effectiveness of teaching
and learning...We need to be clear about what we value in higher education so we can act
on those values...
...we cannot pretend that government at any level can mandate or regulate real quality
in higher education, and we must recognize the great diversity in the mission and
organization of colleges and universities. Our task is to shape the context -- including
the public policies and mechanisms of self-regulation -- in which they operate, so that
market forces, incentives, public investments and accountability mechanisms strengthen and
enhance quality."
Accountability to students is considered to be as important as accountability to public
officials. The results of recent ECS focus groups, which are consistent with other student
interest studies, reveal four areas of student concern (ECS, 1995).
Individual outcomes. The bottom line for students is the return they are
likely to obtain from investing in a college education. What are the chances of getting a
degree? How long will it take? What is the credential worth in the marketplace? Are
additional credentials or licenses required? What are the chances of getting a job? What
kinds of jobs are associated with particular majors or degrees? What salaries can be
expected, right after graduation and in the long term? How will a college make skill
enhancements available to its graduates so they can keep pace in the future job
environment?
Key experiences. Students are also concerned about the college experience
itself. Is there out-of-class contact with faculty? Is there active classroom learning
that involves doing rather than just listening? Are there opportunities for internships?
Are there chances to acquire practical skills, such as computer skills?
Support services. All students, particularly those who commute, attend
part-time and have family obligations, are concerned about basic administrative and
support services. Is there support for overcoming skill deficiencies? Is there adequate
academic advisement? Are career counseling, personal counseling, and child care available?
Are services delivered during hours when nontraditional students can use them? Are there
understandable and efficient "customer-oriented" administrative services,
including admissions, registration, financial aid administration, bursar and fee-payment,
and access to academic records? Or, is student time regarded as a "free good"
that can be wasted?
Costs. Cost concerns are important for almost all students. Besides the
monetary costs of tuition and fees and books and so on, students are concerned about the
"opportunity costs" associated with college attendance, recognizing that their
own investment of time, money and energy is substantial and should not be squandered.
Performance Reporting in the States
In 1995, about one-third of the states had issued or were planning to
issue a higher education performance report, mostly for public institutions (Ewell, 1996).
By August 1996, more than half of the states were involved, and about two-fifths were
linking or planning to link funding to performance in some way, with arrangements ranging
from the simple publication of a performance report to the achievement of performance
goals, as shown in Table 3.
Some of the state reports have been mandated by legislatures, while others were initiated
by coordinating or governing boards. The reports have a variety of formats. Some reports
have only system-level information, some have only institution-level detail, and some have
both. In some states, reports are issued by a central board. In other states, each public
institution is responsible for issuing its own report in accordance with state guidelines
for form and content. Only six states release institution-level performance information
for more than several dozen institutions: Florida (37), Illinois (62), North
Carolina (188, a statistical abstract only), Tennessee (59), Texas (97), and Virginia (40).
Table 3. States with Higher Education Performance Reports Issued or Planned as of 1996 (N = 32)

State | Institutions included (#) | Types of institutions | Link to funding? | Lowest level of detail for public release
AL | 16 | Public | Yes | Institutions
AR | 32 | Public | Yes | Institutions
AZ | 3 | Public | | Universities
CA | 287 | Public & independent | | Systems
CO | 28 | Public | Yes | 1995: Types of institutions; by 1997: Institutions
CT | 24 | Public | Yes | Institutions
FL | 37 | Public | 2-yr colleges | Institutions
GA | 34 | Public | Yes | Institutions
HI | 11 | Public | | Institutions
ID | 6 | Public | Yes | Institutions
IL | 62 | Public | | Institutions
IN | 16 | Public | | 1984-89: Institutions; 1989-1996: System; 1997: new emphasis on institutions
KS | 6 | Public | Yes | Institutions
KY | 22 | Public | Yes | Institutions
LA | 20 | Public | Yes | Institutions
MD | 13 | Public | | Institutions
MI | 28 | Community colleges | | Institutions
MN | 30 | Public | Yes | Institutions
MO | 25 | Public | Yes | Institutions
NJ | 45 | Public & independent | | Independent: System; Public: Institutions
NM | 23 | Public | | Institutions
NY | 250 | All degree-granting | | 1994: Institutional type (SUNY only); 1998: Institutions (all degree-granting)
NC | 118 | Public & independent | | Institutions: statistical abstract only
OH | 24 | Public | Yes | Institutions
RI | 3 | Public | Yes | Institutions
SC | 33 | Public | Yes | Institutions
TN | 59 | Public & independent | Yes | Institutions
TX | 97 | Public | Yes | Institutions
UT | 9 | Public | | Institutions
VA | 40 | Public | Yes | Institutions
WV | 16 | Public | Yes | Institutions
WI | 26 | Public | Yes | Institutions

SOURCES: McGuinness, 1994; Ruppert, 1994; Layzell and Caruthers, 1995; Ewell, 1996; Schmidt, 1996; McKeown, 1996; NYSED, 1996; Chronicle of Higher Education Almanac (September 2, 1996).
Purposes of performance reporting.
There are nine common purposes for higher education performance reporting in the states
(Ruppert, 1995; Epper, 1994). They are:
1. increasing legislative and public support for higher education;
2. helping to allocate public funds (through incentive- or performance-based funding);
3. monitoring the general condition of higher education;
4. identifying potential sources of problems or areas for improvement;
5. improving the effectiveness and efficiency of colleges and universities;
6. focusing college and university efforts on state policy goals;
7. assessing progress on state priorities and goals;
8. improving undergraduate education; and
9. improving consumer information and market mechanisms.
Effectiveness. Has performance
reporting been effective? The consensus is that there have been both positive results and
lingering uncertainties (Ruppert, 1995). The persistence of performance reporting in the
states that adopted it early and its steady adoption in more states each year suggest that
policy makers see it as a valuable tool. But there is no compelling evidence that
performance reporting by itself achieves state purposes. Rather, performance reporting
seems to be a useful tool when it is combined with statewide goal setting, planning, and
other quality assurance mechanisms such as "quality audits" and peer reviews
(Gaither, Nedwek and Neal, 1994; Gaither, 1995; Ruppert, 1995). Performance indicators have
successfully focused institutional efforts on state policy goals when a small portion of
state funding has been tied to performance (Gauthier, 1996; Sugarman, 1996; Schmidt,
1996). Conversely, when performance indicators have been used without clear purposes and
consequences, institutions have viewed them as "inappropriate harassment" (Zikmund, 1996).
"Best practices." Much of
the practitioner literature on state level performance reporting includes sections on
"lessons learned" from experience (Ewell, 1994; Jones and Ewell, 1994; Ruppert,
1995; Sanders et al., 1996; Stein, 1996; Burke, 1996; Ewell, 1996). The essence of those
lessons is listed below.
1. Integrate performance reporting with other state level policy tools, such as
planning or budgeting, and make sure everyone knows in advance what its purposes are.
2. Use indicators as a way to improve higher education, not threaten it.
3. Select indicators as "signals" of progress toward important state
policy goals.
4. Involve the higher education community, its students and its stakeholders
(government and employers) in the development of indicator systems.
5. Be sensitive to differences in institutional missions and clienteles. Consider
using "core" performance indicators for all institutions, supplementary
indicators for different types of institutions, and "input" or
"context" indicators to show how institutions differ.
6. Use a limited set of indicators. The public tends to focus on a few key
indicators no matter how many are used.
7. Focus on effective communication to the public. Explain what each indicator
means and why it is important. Also explain data limitations.
8. Use "benchmarks" to make indicators meaningful. Benchmarks can be past
performance or the performance of appropriate others.
9. Minimize technical complexity and cost. Use indicators and benchmarks
that are valid, reliable and statistically robust under conditions of
imperfect data. Use available statistics to the extent possible. Involve
statistical experts early on.
Best practice guidelines have sometimes been difficult to follow. For
example, although indicators should be selected because they are valid, reliable,
policy-related and statistically robust, in practice indicators are often selected
because they can be computed with available data; as a result, a few indicators in use
are of questionable validity (Gaither, Nedwek and Neal, 1994).
Indicators in use.
Literally hundreds of different performance indicators are in use in higher
education and the mass media in the U.S. and western Europe for audiences
that include government officials, institutional trustees and officers,
faculty, and prospective students and their parents. In a recent study
of ten states using performance indicators for their public systems, the
following were found to be the most common (Ruppert, 1995):
1. Regular admissions standards and comparisons of entering students to those standards;
2. Remediation activities and indicators of remedial effectiveness;
3. Enrollment, retention and graduation data by gender, ethnicity and program;
4. Total student credit hours produced by institution and discipline;
5. Transfer rates to and from two-year and four-year colleges;
6. Total degrees awarded by institution and program and time to degree;
7. Pass rates on professional licensure exams;
8. Placement data on graduates;
9. Results of follow-up satisfaction studies of alumni, students, parents, and
employers;
10. Faculty workload and productivity data;
11. Number and percentage of accredited and eligible programs; and
12. External or sponsored research funds.
Performance indicators tend to cluster around broad state policy goals. Five common
clusters and sample indicators are shown below (Ruppert, 1995).
Educational Quality and Effectiveness
This category emphasizes inputs, process and outputs related to undergraduate
teaching and learning. Example indicators are ACT and SAT scores of entering freshmen,
number of freshmen meeting state admissions standards, number of students in remediation,
effectiveness of remedial instruction, availability of academic programs, amount of
financial commitment to instruction, student-faculty ratios, class size, percent of
students taking at least one course with fewer than 15 students, student assessment
results, student performance on nationally-normed exams, type of faculty teaching
lower-division courses, time to degree completion, course demand analysis, graduation
rates, performance of graduates on licensure exams, job placement rates, graduate and
employer satisfaction, number of degrees awarded by discipline, and number of degrees
granted.
Access-Diversity-Equity
This category is related to higher education's ability to accommodate changing
demographics and the changing needs of the student population. Examples of indicators are
persistence and graduation rates by ethnicity, availability of financial aid, faculty
diversity, college participation rates, progress in affirmative action, and student
demographics.
Efficiency and Productivity
This category refers to how well and at what cost a campus, system or state addresses
its particular goals or priorities. Examples include program costs, time to degree and
number of credits by institution and degree, classroom and laboratory utilization, charges
to students, state appropriations per capita and per resident student, total contact hours
of instruction by faculty rank, facilities maintenance, average faculty salary, and
student-faculty ratios.
Contribution to State Needs
This category refers to state policy makers' increasing concerns about workforce
development and economic competitiveness issues. Examples include relation of programs to
employer needs, number of graduates in critical employment fields, economic impact on the
state, continuing education patterns, and employer ratings of "responsiveness."
Connection and Contribution to Other Education Sectors
This area reflects the desire to see the educational system as a whole. Examples of
indicators in use are effectiveness of remedial education, feedback on performance to high
schools, and research and service in support of K-12.
A recent article classified more than two hundred indicators of institutional inputs,
processes and outputs into twenty-two topic areas (Bottrill and Borden, 1994). That
article appears as Attachment 1 and has check marks next to indicators that could be
computed with data currently available to the New York State Education Department.
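To make two of the output indicators named above concrete, the sketch below shows one way a six-year graduation rate and a median time to degree might be computed from an institution's entering-cohort records. It is an illustration only; the record layout, field names and sample values are hypothetical and do not correspond to any particular state or institutional data system.

# Illustrative sketch only: record layout, field names and sample values are hypothetical.
from statistics import median

# One record per entering first-time, full-time student (hypothetical layout).
cohort = [
    {"student_id": "A1", "entry_year": 1990, "graduated": True,  "grad_year": 1994},
    {"student_id": "A2", "entry_year": 1990, "graduated": True,  "grad_year": 1996},
    {"student_id": "A3", "entry_year": 1990, "graduated": False, "grad_year": None},
]

def graduation_rate(records, within_years=6):
    """Share of the entering cohort that graduated within the given window."""
    grads = [r for r in records
             if r["graduated"] and r["grad_year"] - r["entry_year"] <= within_years]
    return len(grads) / len(records)

def median_time_to_degree(records):
    """Median elapsed years from entry to degree, graduates only."""
    times = [r["grad_year"] - r["entry_year"] for r in records if r["graduated"]]
    return median(times) if times else None

print(f"Six-year graduation rate: {graduation_rate(cohort):.0%}")
print(f"Median time to degree: {median_time_to_degree(cohort)} years")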
Information resources and needs
In 1995, thirty-nine states had databases with student-level records for one or more of
their public higher education systems (Russell, 1995). For the purposes of performance
reporting, student-level records with social security numbers are preferable to
institution-level records because they are more flexible and because they enable higher
education systems to report back to high schools on student progress, to track transfers
from one institution to another, and to track graduates into the labor market with minimal
administrative burden (Stein, 1996).
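The kind of cross-institution tracking described above can be shown with a minimal sketch. It assumes student-level enrollment records keyed by a common student identifier; the identifiers, institutions and terms are hypothetical. The same movement could not be detected from institution-level aggregates alone.

# Illustrative sketch only: institutions, identifiers and records are hypothetical.
# Each term's enrollment is keyed by a common student identifier, so movement
# between institutions can be observed by comparing terms.
fall_1995 = {"S1": "Two-Year College A", "S2": "Two-Year College A", "S3": "University B"}
fall_1996 = {"S1": "University B",       "S2": "Two-Year College A", "S3": "University B"}

# Students enrolled in both terms whose institution changed are counted as transfers.
transfers = [sid for sid in fall_1995
             if sid in fall_1996 and fall_1996[sid] != fall_1995[sid]]

print(f"Transfers between fall 1995 and fall 1996: {len(transfers)}")  # -> 1 (S1)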
A persistent problem for all the states engaged in performance reporting has been the
scarcity of good national benchmarks. The states' solution has generally been to use the
best benchmarks available and to note their limitations.
Part Two
The New York Context
Education Law gives the Board of Regents responsibility for state-level planning,
quality assurance and public accountability in New York's large and diverse system of
public and private higher education. For those purposes, the Regents and the Education
Department review institutions and their educational programs, issue statewide plans for
higher education, and make extensive statistical information available to the public. The
Regents and Department have issued detailed performance reports for state and federal
categorical programs under their jurisdiction, but they have not used performance reports
for general institutional accountability and improvement.
New agenda
Over the past several years, there has been growing interest in public performance
reporting for higher education in New York. In 1993, the Regents Commission on Higher
Education called for the development of statewide strategic goals and performance
indicators and regular reporting on progress toward the goals. The Commission also called
upon every institution to set its own strategic goals and publicly report on its progress.
In 1994, the State University of New York, in the spirit of the Commission's
recommendations, issued a systemwide performance report (Burke, 1996). In the same year,
the "New York State Government Performance and Results Act," which would have
required extensive performance reporting by all state agencies, including the State
University and City University, was introduced in the Assembly. By the spring of 1996, the
State Senate had approved S. 7231, a bill that would have required the Commissioner to
issue a "report card" on higher education institutions every year, starting in
1998 (Attachment 2). The same bill was introduced in the Assembly but was not reported out
of committee. Even without a legislative mandate, the Regents 1996 strategic plan
announced that the Regents would set goals for higher education by 1997 and issue a
performance report by 1998.
Community reactions
The Regents and Education Department sought public comment on their performance
reporting initiative by widely circulating their 1996 strategic plan and their March 1996
draft of a 1996-2004 statewide plan for higher education. In addition, Department staff
have been meeting with leaders from higher education and government to solicit comments.
Community reactions to performance reports for each of New York's 251 institutions have
been mixed. Support, contingent on implementation details, has come from the Governor's
office, the Division of the Budget, the State Senate, SUNY's central offices, other state
agencies, and some in the K-12 education community. Reasons for support include interest
in:
- giving students information to guide their college investment decisions;
- giving policy makers information to guide the allocation of public funds;
- identifying areas that need improvement so that steps can be taken to keep New York
competitive;
- having a fair and level playing field of performance information for all sectors that
compete for state and federal tax support; and
- building statewide capacity to assess the performance of all providers of postsecondary
education and training for the purpose of improving job training and economic development.
The independent sector and the higher education leadership in the Assembly do not at
the present time support the publication of institution-level performance reports. Some
independent sector leaders have expressed the view that Regents program approval, in
combination with accreditation, trustee oversight, alumni feedback and donor support,
already provides sufficient public accountability. Overall, concerns with institutional
reports are that they could:
- be misleading;
- threaten state funds to institutions serving underprepared and disadvantaged students;
- increase administrative burden on institutions while adding no value;
- intrude on the academic decisions of faculty; and
- weaken the market position of weak institutions, making it more difficult for them to
attract students and improve their programs.
Both supporters and detractors of performance reporting offered suggestions about how
to implement it. In general terms, they included the following.
- A systemwide report with sector-level information would be preferable to
institution-level reports.
- It would be prudent to phase in institutional reports to test their feasibility and
utility. Suggested starting points have been public institutions (because they rely more
than other institutions on public funds) and two-year colleges (because, as a group, they
are more homogeneous than other types of institutions with respect to mission and
clientele).
- It would be prudent to use performance reports in the Department's oversight of
institutions and their programs before disclosing performance information to the public.
This would give institutions a chance to improve.
- The success of performance reporting will hinge on how well institutional differences
are taken into account and communicated to the public. It would not be appropriate to
issue reports that rank all institutions without reference to differences in their
missions and students served.
Information resources
Information resources are obviously essential for performance reporting. A wealth of
information exists in New York, but it is not necessarily coordinated or in a form that
would permit the construction of many performance indicators commonly used in other
states. The State University and City University each have sectorwide information systems
that enable them to track the progress of individual students within their own systems,
among other things, but their systems are not linked to one another. Neither the
independent sector nor the proprietary college sector -- which together have about forty
percent of the state's enrollment -- has a coordinated student information system. The net
result is that, compared to other states, New York lacks the capacity to track the flow of
students within its higher education system.
One of the recommendations in the 1993 report of the Regents Commission on Higher
Education was to create a statewide system of student records that would permit the
Regents to have better information about the flow and achievements of students. Although
Department staff drafted proposals for such a system, lack of funding and concerns about
privacy and centralized records have prevented progress from being made.
The State Education Department does have extensive institution-level information for every
degree-granting institution in its Higher Education Data System, known as HEDS. HEDS data
comes from surveys of institutions that are required by the National Center for Education
Statistics and, in a few instances, by the State Education Department. If new data
elements were needed for performance reporting and HEDS were the vehicle for obtaining
them, then new HEDS surveys would have to be submitted by institutions or existing surveys
would have to be augmented.
There may be other information resources that could be brought to bear on performance
reporting without increasing institutional reporting burden. For example, the New York
State Higher Education Services Corporation and the U.S. Department of Education have
extensive student-level information about recipients of state and federal student aid.
This group represents as much as four-fifths of the total New York State enrollment,
according to the Corporation. Other resources may be available from private sources.
Tables
Table 1. Status of Undergraduate Assessment Initiatives in Public Higher Education in the States in 1995
Table 2. Federal Laws with Performance Indicators for Higher Education
Table 3. States with Higher Education Performance Reports Issued or Planned as of 1996
References
Adelman, Clifford, Ed., Assessment in American
Higher Education, OERI, U.S. Department of Education, 1986.
Alabama S.J.R. 32, a bill creating the Higher Education Funding Advisory Commission to
develop a new funding approach for higher education that is performance based (adopted as
amended 5/20/96).
American Association of Community Colleges (AACC), Community Colleges:
Core Indicators of Effectiveness: A Report of the Community College Roundtable, 1994.
Associated Press, "Magazines enter college-guide business," Albany
Times Union, August 6, 1996.
Barton, Paul E. and Archie LaPointe, Learning by Degrees: Indicators
of Performance in Higher Education, Educational Testing Service, Princeton, NJ, 1995.
Bottrill, Karen V. and Borden, Victor M.H., "Appendix: Examples from the
Literature," in New Directions for Institutional Research, No. 82,
Jossey-Bass Publishers, Summer 1994.
Burke, Joseph C., "Performance Reporting in SUNY: An Imperfect
Beginning, an Uncertain Future," in Gaither, Gerald H., Ed., Performance Indicators
in Higher Education: What Works, What Doesn't, and What's Next?,
Texas A&M University System, June 1996.
Burke, Joseph C., "Performance Reporting in SUNY: An Imperfect
Beginning, an Uncertain Future," in Developing a Workable System of Performance
Indicators, Society for College and University Planning Pre-Conference Workshop
Materials, July 1996.
Burke, Joseph C., Performance Reporting in SUNY: Postscript and
Prospect, Rockefeller Institute, undated, unpublished paper.
California Assembly Bill No. 1808 (Chapter 741, Statutes of 1991).
California Postsecondary Education Commission, Performance Indicators
of California Higher Education, 1995, Report 96-2.
Chamberlain, Don and Van Daniker, Relmond P., "Colleges and
Universities," in Harry P. Hatry et. al., Service Efforts and Accomplishments
Reporting: Its Time Has Come, Governmental Accounting Standards Board, 1991.
Chaney, Bradford and Farris, Elizabeth, Survey on Retention at Higher
Education Institutions, Higher Education Survey Reports, National Science Foundation,
The National Endowment for the Humanities and the U.S. Department of Education, November
1991.
Cole, William, "By Rewarding Mediocrity We Discourage
Excellence," Chronicle of Higher Education, January 6, 1993.
Colorado House Bill 96-1219, Higher Education Quality Assurance Act.
Colorado Commission on Higher Education, Scorecard on Colorado Public
Higher Education, March 1995.
Commission on Higher Education, Middle States Association of Colleges and
Schools (CHE/MSACS), Frameworks for Outcomes Assessment, 1996.
Connecticut Department of Higher Education, Profiles of Connecticut
Colleges and Universities, 1992.
Education Commission of the States (ECS), Connecting Learning and
Work: A Call to Action, August 1996.
Education Commission of the States (ECS), Making Quality Count in
Undergraduate Education, October 1995.
El-Khawas, Elaine, Campus Trends, 1995, American Council on
Education, July 1995.
Epper, Rhonda Martin, Ed., Focus on the Budget: Rethinking Current
Practice, State Higher Education Executive Officers, May 1994.
Ewell, Peter T., "The Current Pattern of State-Level Assessment:
Results of a National Inventory," in Performance Indicators in Higher Education:
What Works, What Doesn't, and What's Next?, Gerald H. Gaither, Ed., Texas A&M
University System, June 1996.
Ewell, Peter T., "A Matter of Integrity: Accountability and the
Future of Self-Regulation," Change, November/December 1994.
Ewell, Peter T., "Feeling the Elephant: The Quest to Capture
'Quality'," Change, September/October 1992.
Ferris, Vera King, Final Report of the Committee on Excellence and
Accountability Reporting, New Jersey President's Council Committee on Excellence and
Accountability Reporting, June 1995.
Gaither, Gerald H., Ed., Performance Indicators in Higher Education: What
Works, What Doesn't, and What's Next?, Texas A&M University System, June
1996.
Gaither, Gerald, Ed., Assessing Performance in an Age of
Accountability: Case Studies, Jossey-Bass Publishers, Fall 1995.
Gaither, Gerald, "Some Observations While Looking Ahead," in
Gaither, Gerald, Ed., Assessing Performance in an Age of Accountability: Case Studies,
Jossey-Bass Publishers, Fall 1995.
Gaither, Gerald, Nedwek, Brian P., Neal, John E., Measuring Up: The
Promises and Pitfalls of Performance Indicators in Higher Education, ASHE-ERIC Higher
Education Reports, 1994.
Gauthier, Howard, Ohio Board of Regents, telephone conversation, August
1996.
Georgia University System, 1994-95 Information Digest, February
1996.
Greenwood, Addison, National Assessment of College Student Learning:
Getting Started, National Center for Education Statistics, May 1993.
Hartle, Terry W., "The Growing Interest in Measuring the Educational
Achievement of College Students," in Adelman, Clifford, Ed., Assessment in
American Higher Education, OERI, U.S. Department of Education, 1986.
Hauptman, Arthur M. and Mingle, James R., Standard Setting and
Financing in Postsecondary Education: Eight Recommendations for Change in Federal and
State Policies, SHEEO, November 1994.
Indiana Commission for Higher Education, Indicators of Progress,
February 1992.
Indiana Commission for Higher Education, State-Level Performance
Objectives for Indiana Postsecondary Education, May 1, 1995.
Joint Commission on Accountability Reporting (JCAR), A Need Answered:
An Executive Summary of Recommended Accountability Reporting Formats,
AASCU-AACC-NASULGC, 1995.
Jones, Dennis, "Designing State Policy for a New Higher Education
Environment," NCHEMS News, October 1994.
Jones, Dennis and Ewell, Peter, "Pointing the Way: Indicators as
Policy Tools in Higher Education," in Ruppert, Sandra S., Ed., Charting Higher
Education Accountability: A Sourcebook on State-Level Performance Indicators,
Education Commission of the States, June 1994.
Jones, Elizabeth A., National Assessment of College Student Learning:
Identifying College Graduates' Essential Skills in Writing, Speech and Listening, and
Critical Thinking (NCES 95-001), National Center for Education Statistics, 1995.
Katz, Stanley N., "Defining Education Quality and
Accountability," Chronicle of Higher Education, November 16, 1994.
Kentucky Council on Higher Education, Kentucky Higher Education
System: 1995 Annual Accountability Report Series.
Korb, Rosalyn, Postsecondary Student Outcomes: A Feasibility Study,
U.S. Education Department, OERI, 1992.
Larson, Elizabeth, "Business pays price for a poor
education," Albany Times Union, July 14, 1996.
Layzell, Daniel and Caruthers, J. Kent, Performance Funding for Higher
Education at the State Level, 1995.
Mahtesian, Charles, "Higher Ed: The No-Longer-Sacred Cow," Governing,
July 1995.
McGuinness, Aims C., et al., State Postsecondary Education Structures
Handbook, Education Commission of the States, 1994.
McKeown, Mary, State Funding Formulas for Public Four-Year
Institutions, Paper presented to the 1995 Annual Meeting of the Association for the
Study of Higher Education, November 2, 1995.
National Center for Higher Education Management Systems (NCHEMS), A
Preliminary Study of the Feasibility and Utility for National Policy of Instructional
"Good Practice" Indicators in Undergraduate Education, National Center for
Education Statistics (NCES 94-437), August 1994.
National Governors' Association, Time for Results: The Governors' 1991
Report on Education, NGA Center for Policy Research and Analysis, August 1986.
New Jersey Commission on Higher Education, New Jersey's Renewable
Resource: A Systemwide Accountability Report, April 1996.
New Mexico Commission on Higher Education, 1994 Condition of Higher
Education in New Mexico.
New York State Education Department (NYSED), Office of Higher and
Professional Education, Research and Information Systems, e-mail survey of State Higher
Education Executive Officers and selected follow-up telephone interviews, June 1996.
New York State Regents Commission on Higher Education, Sharing the
Challenge, September 1993.
North Carolina, Statistical Abstract of Higher Education in North
Carolina, 1995-96, June 1996.
Ratcliff, James L. and Associates, Realizing the Potential: Improving
Postsecondary Teaching, Learning and Assessment, The National Center on Postsecondary
Teaching, Learning and Assessment, August 1995.
Ruppert, Sandra S., Ed., Charting Higher Education Accountability: A
Sourcebook on State-Level Performance Indicators, Education Commission of the States,
June 1994.
Ruppert, Sandra S. "Roots and Realities of State-Level Performance
Indicator Systems," in Gaither, Gerald, Ed., Assessing Performance in an Age of
Accountability: Case Studies, Jossey-Bass Publishers, Fall 1995.
Russell, Alene Bycer, Advances in Statewide Higher Education Data
Systems, SHEEO, October 1995.
Sanders, Keith R., Layzell, Daniel and Boatright, Kevin, "University of Wisconsin
System's Use of Performance Indicators as Instruments of Accountability," in Gaither,
Gerald H., Ed., Performance Indicators in Higher Education: What Works, What Doesn't, and
What's Next?, Texas A&M University System, June 1996.
Schmidt, Peter, "Report Urges Reforms in Job Training," Chronicle
of Higher Education, August 1996.
Schmidt, Peter, "Earning Appropriations," Chronicle of
Higher Education, May 24, 1996.
Schmidt, Peter, "West Virginia Starts Sweeping Overhaul of Public
Higher Education," Chronicle of Higher Education, July 26, 1996.
South Carolina Commission on Higher Education, Guidelines for
Institutional Effectiveness, July 1, 1995.
South Carolina Commission on Higher Education, Minding Our
"P's" and "Q's": Indications of Productivity and Quality
in South Carolina's Public Colleges and Universities, January 1996.
State University of New York (SUNY), Performance Indicator Report,
1994.
Steele, Joe M. and David A. Lutz, Report of ACT's Research on
Postsecondary Assessment Needs, American College Testing, June 1995.
Stein, Robert E., "Performance Reporting/Funding Program: Missouri's
Efforts to Integrate State and Campus Strategic Planning," in Gaither, Gerald H., Ed., Performance
Indicators in Higher Education: What Works, What Doesn't, and What's Next?,
Texas A&M University System, June 1996.
Sugarman, Roger P., Kentucky Council on Higher Education, correspondence,
1996.
Taylor, Barbara E., Meyerson, Joel W. and Massey, William E., Strategic
Indicators for Higher Education: Improving Performance, Peterson's Guides, 1993.
Tennessee Higher Education Commission, The Status of Higher Education
in Tennessee including the Sixth Annual Report on Progress toward the Goals of Tennessee
Challenge 2000 for the State's Public Higher Education Institutions and The Third Annual
Report on Contributions of the State's Independent, Regionally-accredited Higher Education
Institutions, 1996.
Trachtenberg, S. J. and Wise, Arthur E., "University Presidents and
Accreditation Agencies: Serving and Protecting Students and the Public," Chronicle
of Higher Education, advertisement, April 26, 1996.
Tucker, Marc, "Many U.S. Colleges Are Really Inefficient, High-Priced
Secondary Schools," Chronicle of Higher Education, June 5, 1991.
University of Hawaii, Benchmarks: Performance Indicator Report,
November 1995.
Zikmund, Joseph, State of Connecticut Department of Higher Education,
correspondence, 1996.
Attachments
Attachment 1. Examples from the Literature
An article containing over 200 higher education performance indicators in use in
the U.S. and Europe. Indicators with check marks are those that the New York State
Education Department could compute with currently available information.
Attachment 2. Senate bill S. 7231