Conceptual Framework for State Analysis



For the purposes of the case study, the framework had three components: one for the policy process, one for the synthesis of themes, and one for the analysis of findings. A diagram of the framework is presented in Figure 1. Based upon prior research on policy documents[1], a survey of SHEEO administrators[2], and a review of the literature[3], we identified the important issues to consider when analyzing policy development: the beliefs and expectations of state policymakers, SHEEO agency leaders, and college and university leaders regarding assessment and higher education performance. Also important are the relationships among these three types of actors, as well as the social, political, and economic contexts surrounding the policy process in the state.

The five state cases are presented in a process framework, which includes the inputs, processes, outcomes, and impacts of each state’s assessment policy. It incorporates an analysis of the context for the policy, including its historical and political inputs, its effects, its degree of success[4], and the policy type[5]. This framework guided our processes for data collection and individual case analysis.


Policy Process Component

The primary component of the conceptual framework is the examination of the stages in the development of assessment policy. A process model facilitated the description and analysis of the critical factors comprising the formation and enactment of each state's assessment policy. Palumbo illustrated that examining the development of the policy over time is critical because public policy is considered a process of government activity that takes place over many months and years rather than merely a single event, decision, or outcome. He also presented a five-stage model for the process of policy formation, outlining critical events that serve to move policies through each stage[6].

Figure 1: Conceptual Framework for State Analysis

Our model is adapted from the six stages outlined by Anderson and his colleagues, who presented stages marked by similar critical events but also provided descriptors for each stage[7]. Anderson’s original framework identified six stages in the policy process for any policy domain: (1) problem identification; (2) agenda setting; (3) policy formulation; (4) policy adoption; (5) policy implementation; and (6) policy evaluation. Because the “agenda setting” stage was found not to be applicable to the state higher education assessment policy domain, we did not include it in this analysis and interpretation. Our policy process model for higher education identifies five stages in the development process for state assessment policies:

Problem Formation: The period when the need for a state-level assessment policy was first recognized;

Policy Formulation: Development of pertinent and acceptable proposed courses of action for dealing with public problems;

Policy Adoption: Development of support for a specific proposal such that the policy is legitimized or authorized;

Policy Implementation: Application of the policy to the problem;

Policy Evaluation: Attempt by the state to determine whether the policy has been effective.

The case narratives for this component of the conceptual framework outline the policies in each of the states, and describe the events in their development, the dynamics surrounding them, and the influence of important policy actors on the final design. Also, the discussion examines how the policy moved from formulation to implementation, obstacles in the process, as well as reflections on what evaluation efforts revealed and what changes were considered.


Case Syntheses Component

State Approach

In the second component of the framework, states were compared along six dimensions that describe the critical content from the case studies. These dimensions flow from the questions outlined in the conceptual framework and seek to provide greater detail on the crucial aspects of the policies. The dimensions also allow for a better cross-case analysis as well as for the examination of the connections between levels of assessment policy. The six dimensions for comparison and analysis are:

History & Originating Dynamics of the Policy includes a discussion of why the policy was initiated and what state action led to its development.

Purpose & Objectives outlines the intentions of the policy and the state’s policy priorities.

Design & Features addresses how the policy functions after implementation, including data collection and measurement.

Leadership & Management examines the roles and actions of policymakers, institutional representatives, and the state agency.

Outcomes & Effects summarizes the actual results from the policy, as well as their implications.

Conditions Shaping Policy Outcomes analyzes how the policy objectives were made actionable and the extent to which they were achieved, including any different or unexpected outcomes. This analysis examines how the policy contexts influenced the outcomes, as well as the significance of policy design, implementation structures or barriers, and the actions of political or institutional players.

In addition, a conclusions section discusses the overall effectiveness of each state policy, which aspects of the policy were successful, and what changes might improve it.


Regional Association Approach

The regional accreditation associations were selected in order to study the emphases their policies and standards place on improving student learning and achievement as a requirement for accreditation. The standards and criteria of the associations are analyzed along the following six dimensions:

The history and dynamics of the policy’s development.

The criteria and requirements set forth for institutional accreditation regarding assessment.

The methods and processes for assessment that the associations prescribe for institutions, including reporting, testing, and data collection.

The institutional support offered to campuses as they implement these requirements, including training, resources, and services.

The relations with state agencies, including the influence of the association on institutions and the coordination, cooperation, and communication between the association and state higher education agencies.

An evaluation of the association's policy, including a review of and reflection upon its effectiveness.

The discussion of the regional associations provides a review of the associations’ focus on assessment for improving learning and teaching, the types of outcomes measured and processes used, and the tension between institutional accountability and institutional autonomy. To understand the associations’ engagement with assessment, the analysis also includes the relationship of each association to the state higher education agencies, its willingness to work with institutions to meet the criteria, and its efforts to evaluate its assessment program.



Case Analysis Component

The third component of the conceptual framework is analytical. It examines the outcomes of the policies in light of the stated objectives. In earlier research, Nettles & Cole (1999a; 1999b) showed how the states seek to meet a variety of objectives with their assessment policies, from improving student learning to holding institutions accountable for their effectiveness. The objectives for assessment policy and accreditation standards are significant because they reflect policymakers’ perceptions of the academic results and standards of performance that colleges and universities should be achieving. Assessment policy objectives also reveal priorities that have consequences for institutional behaviors and decisions.

Objectives tell only half of the policy story, however. Equally important (and revealing) is an analysis of the intended and unintended outcomes. While a state may have stated objectives for its assessment policy, those objectives are not always achieved, and even when they are, there may also be additional outcomes. This distinction between stated policy objectives and actual outcomes is important, particularly for understanding the dynamics of the policy process at the state level. The distinction has also been addressed in the policy analysis literature, which differentiates intentional analyses, which focus on what was or is intended by a policy, from functional analyses, which focus on what actually happened as a result of a policy[8].

Our goal is to compare the intended and the actual outcomes of the policies, while also attempting to describe the key factors that led to these outcomes. This component of our framework examines the connection between policy objectives and outcomes. It addresses the following questions:

What political circumstances in the state led to the adoption of the policy?

What entities and factors influenced the policy content?

What was the quality of the relationships between colleges and universities and the state government at the time the assessment policy was developed?

This policy context element is concerned with the historical, social, and economic inputs related to the policy’s origin, such as how awareness of the need for a new policy, or for a change in an existing one, was created. Also included are political inputs, such as a state’s governance structure for higher education or the original legislation or political action leading to the development of an assessment policy. The relationships and communications among the governing agencies, the state governments, and the institutions are also key factors in understanding the policy context.

What were the primary objectives of a state’s assessment policy?

What priorities were identified in the policy?

The states and regional accreditation associations have a variety of reasons for adopting assessment policies and standards, and these are designed to meet a variety of objectives. The intentions of a policy include the following: to increase public or fiscal accountability; to improve college teaching or student learning; to promote planning or academic efficiency on campus; to facilitate inter- or intra-state comparisons; and to reduce program duplication.

What were the institutional, political, and financial results of this policy?

How were the institutional, political, and financial results different from those that were expected?

Although a state may have clearly articulated objectives for its assessment policy, those objectives may not always be met in practice. There may be important interactions among the objectives; some may complement one another while others may work at cross-purposes. A policy’s design may include elements that link objectives to successful outcomes, or it may face structural or procedural barriers during implementation that undermine its potential. Alternatively, a policy might produce unintended or unexpected outcomes, thereby creating new problems or exacerbating old ones. The distinction between policy objectives and outcomes is significant for understanding the best methods for developing and implementing policy.

How did the interactions among state government officials, the SHEEO agencies, and institutional representatives contribute to these outcomes?

What policy design and structural elements were significant in producing the outcomes?

What contextual factors in the political and social climate for assessment are relevant?

What explanations are possible for any disjuncture between objectives and outcomes?

Our conclusions reflect on the performance of the policy and its effects on assessment practices among institutions. The intent is to determine whether states have been successful in improving teaching and learning, and to identify the reasons for the outcomes. Identifying the relevant factors allows us to highlight lessons that might be applicable to other states. This analysis considers the interactions between the various policy actors and the differing levels at which policy operates, e.g., the state, regional, and institutional levels.



[1] Nettles, Cole, & Sharp (1997).

[2] Nettles, M. T., & Cole, J. J. K. (1999b). State higher education assessment policy: Findings from second and third years. Stanford, CA: National Center for Postsecondary Improvement.

[3] Nettles, M. T., & Cole, J. J. K. (1999a). States and public higher education: Review of prior research and implications for case studies. NCPI Deliverable #5130. Palo Alto, CA: National Center for Postsecondary Improvement.

[4] Dubnick, M., & Bardes, B. (1983). Thinking about public policy: A problem-solving approach. New York: Wiley.

[5] Lowi, T. (1964). American business, public policy, case studies, and political theory. World Politics, 16(4), 677-715.

[6] Palumbo, D. J. (1988). Public policy in America: Government in action. San Diego, CA: Harcourt Brace Jovanovich, Publishers.

[7] Anderson, J. E., Brady, D. W., Bullock, C. S., III, & Stewart, J., Jr. (1984). Public policy and politics in America (2nd ed.). Monterey, CA: Brooks/Cole.

[8] Dubnick & Bardes, 1983.





© 2003, National Center for Postsecondary Improvement, headquartered at the
Stanford Institute for Higher Education Research