Institutional Research, Planning & Data Analytics
Program Assessment
- Introduction
- Setting Goals and Outcomes
- Mapping Learning Opportunities
- Assessing Program Outcomes
- Analyzing and Discussing Results
- Using Data for Improvement
Introduction
Program assessment, also known as student learning outcomes assessment, is the systematic and ongoing process of collecting, analyzing, and using information about educational programs for the purpose of improving student learning and development. This is accomplished by defining a program's expected learning outcomes, identifying learning opportunities, obtaining a reasonable understanding of whether students are achieving these outcomes, and then using this information to help guide improvements in the program. The complete program assessment process is multi-faceted, as displayed in the following figure and explained in the following tabs.
Setting Goals and Outcomes
The first step in the program assessment process is defining what you expect students to learn as a result of participating in the program or educational activity. Developing these "outcome statements" is often much harder than it sounds, requiring thoughtful deliberation among a program's faculty and staff. However, once developed, outcome statements can help to guide future curriculum development and program planning.
Goals and outcomes should be written from the students' perspective, meaning they should indicate what students are expected to know and be able to do as a result of completing the educational program or activity. They differ from course objectives, which describe what the faculty will cover in a course. Goals and outcomes are usually prefaced with the phrase "Students will..." or "Graduates will be able to..."
Goals are broad statements of competence that the program seeks to instill or develop as a result of a program or activity. They are not directly measurable; rather, they reflect more generalized learning expected of students. Examples of program goals include the following:
- Students will understand 19th Century Literature
- Students will be competent in computer programming
- Students will have an appreciation for history
Outcomes, or student learning outcomes (SLOs), are more specific. They define the knowledge, skills, abilities, and habits of mind students are expected to acquire after completing their educational experience. They should be realistic, student focused, and measurable. Contrast the previous statements with the following:
- Students will... be able to compose well-constructed essays that develop a clearly defined claim of interpretation supported by close contextual reading.
- Students will... design, correctly implement and document solutions to significant computational problems
- Students will... formulate, sustain, and justify historical arguments using original ideas
There are many excellent sources on the web on how to write well-crafted program-level outcome statements. Good places to start are professional and accreditation organizations. A list of these organizations by discipline can be found HERE. For some accredited programs (e.g., Nursing, Social Work, Speech and DFN), learning outcomes are prescribed by the accrediting body. Please reference these organizations' standards or assessment manuals for additional information. Websites of institutions with similar programs are also a good place to look, as are the sites of aspirational educational programs. The University of Wisconsin also has an excellent website on how to develop these statements.
Bloom's Taxonomy is another good reference to help formulate learning outcomes. In the 1950s, Benjamin Bloom and his colleagues identified three areas of learning objectives (domains): cognitive, affective, and psychomotor. The cognitive domain has been the focus of educators and is often used to help programs structure learning objectives and develop assessments. The domain is broken into six progressively complex levels of cognition: knowledge, comprehension, application, analysis, synthesis, and evaluation. The Taxonomy includes a set of action verbs for each level that can be useful for developing measurable outcome statements. An example of Bloom's Taxonomy is posted to this website. Click HERE to access it.
Mapping Learning Opportunities
The second step in the program assessment process is to identify where SLOs are addressed in the program. For academic programs, outcomes are most often addressed in courses throughout the curriculum. For non-academic programs, outcomes are addressed in various activities offered to students outside of the classroom (workshops, fieldwork, etc.). Mapping outcomes to learning opportunities helps programs organize their curriculum and activities to ensure that all outcomes are addressed over a student's academic career and to identify any gaps in the learning experience.
The result of the mapping process is a curriculum map, or curriculum matrix, that lists courses/activities on one axis and the outcomes on the other. In Taskstream, programs can indicate the level at which each outcome is addressed in the course/activity by choosing among the following options: Introduced (I), Developed (D), or Mastered (M). A well-constructed curriculum map will provide students with multiple opportunities to meet the outcomes and expose them to introductory, developing, and mastery-level material. For additional information, please refer to the University of Rhode Island's excellent curriculum mapping webpage.
The following figure, taken from Lehman's own English Department, is an example of a curriculum map in Taskstream. Along the top axis are the program's three learning goals and 13 associated learning outcomes. Along the left side are the program's courses, which in this case are clustered into 100- to 400-level categories. The cells where the outcomes and courses intersect, as noted by the boxes, are labeled I, D, or M. In the example below, Goal 2 is not addressed in 100-level courses.
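For programs that keep their curriculum map in a spreadsheet or want to double-check it outside of Taskstream, the following is a minimal sketch of how a map might be represented and scanned for gaps. The course numbers, outcome names, and I/D/M assignments are hypothetical placeholders, not the English Department's actual map.

```python
# A minimal sketch of a curriculum map: courses on one axis, outcomes on the
# other, with I (Introduced), D (Developed), or M (Mastered) in each cell.
# Course and outcome names below are hypothetical placeholders.
curriculum_map = {
    "ENG 120": {"SLO 1": "I", "SLO 2": "I"},
    "ENG 226": {"SLO 1": "D", "SLO 3": "I"},
    "ENG 340": {"SLO 1": "M", "SLO 3": "D"},
}
all_outcomes = ["SLO 1", "SLO 2", "SLO 3", "SLO 4"]

# Collect the levels at which each outcome is addressed across the curriculum.
coverage = {slo: [] for slo in all_outcomes}
for course, cells in curriculum_map.items():
    for slo, level in cells.items():
        coverage[slo].append(level)

# Flag gaps: outcomes never addressed, or never brought to the mastery level.
for slo, levels in coverage.items():
    if not levels:
        print(f"{slo}: not addressed anywhere in the curriculum")
    elif "M" not in levels:
        print(f"{slo}: addressed ({', '.join(levels)}) but never at the mastery level")
```

A check like this simply surfaces outcomes with no (or only introductory) coverage; deciding how to close those gaps remains a curricular decision for the program's faculty.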
Assessing Program Outcomes
The process of actually assessing student learning outcomes at the program level may involve numerous approaches. These may differ from program to program depending on the types of measures that are collected as well as the program's culture. It is important to keep in mind that whatever strategies are selected, the evidence collected should provide a program with reasonably valid and useful information about student learning and development.
In most cases, academic programs will engage in course-embedded assessments. These assessments occur in a classroom setting and are designed to generate information about what and how students are learning. They allow faculty to evaluate and improve approaches to instruction and course design in a way that is built into, and a natural part of, the teaching-learning process. However, for these assessments to be good measures of student learning at the program level, they should occur over several courses. The curriculum map will indicate the courses where each outcome is supposed to be addressed. Assessments occurring outside of the classroom are also permissible as long as they are aligned with the learning outcomes of the program. Common examples include capstones, culminating research projects, dissertations, etc. For administrative programs, activities such as workshops and orientations are good places for assessments of learning and development to occur. Examples include resume writing workshops, leadership training classes, and athletics.
Before developing an assessment plan, there are several important things to consider, described below:
Direct and Indirect Evidence
One consideration is deciding on the type of evidence that will be collected. Direct evidence is tangible, visible, self-explanatory evidence of exactly what students have and have not learned. Examples include tests, essays, research papers, presentations, performances, etc. Indirect evidence, on the other hand, provides clues that students are probably learning, but evidence of exactly what they are learning is not as clear. Examples include surveys, focus groups, awards, etc. At Lehman, all academic programs must include at least one direct measure in their academic assessment plan. Indirect measures can be used to supplement direct evidence, but they cannot be the only measure used. For additional information about direct and indirect evidence, please refer to the following website.
Quantitative and Qualitative
Another consideration when developing an assessment plan is the type of data that will be generated. Data fall into two categories: quantitative and qualitative. Quantitative data are numerical data that can be summarized and analyzed statistically. Examples include rating scales (Likert scales), rubric scores, test scores, and performance indicators. Qualitative data are non-numerical and are usually used to identify recurring themes and patterns. Examples include focus group notes, interviews, comments, and observations. However, for qualitative data to be useful, these assessments need to be systematic and structured. In other words, the assessments cannot be based solely on anecdotal musings of students. Qualitative data are often used in conjunction with quantitative data (mixed methods) to provide additional insight into student thinking (to help paint a broader picture) and are useful for assessing behaviors, attitudes, and habits of mind.
Objective and Subjective
Objective assessments are assessments in which the scoring procedure is completely specified, enabling agreement among different scorers. In other words, there is no professional judgment involved. Examples include correct/incorrect answer tests and multiple-choice exams. This does not mean useful information cannot be gleaned from these types of assessments; most standardized tests follow this format. Item analyses, for example, may reveal interesting patterns that can help inform pedagogy, guide course emphasis, etc. Subjective assessments, in contrast, are assessments in which the impression or opinion of the assessor determines the score or evaluation of performance; the answers cannot be known or prescribed in advance. The most commonly used type of subjective measure is the written assignment. To help minimize subjectivity, rubrics and multiple raters should be employed when scoring student artifacts. At Lehman, all subjective assessments must be accompanied by a rubric or scoring guide.
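When multiple raters score the same artifacts with a rubric, a quick agreement check can show how consistently the rubric is being applied before the scores are used. The sketch below uses invented rubric scores on a hypothetical 1-4 scale and reports exact and adjacent (within one point) agreement; it is an illustration of the idea, not a prescribed procedure.

```python
# Hypothetical rubric scores (1-4 scale) assigned by two raters to the same
# set of student artifacts (one score per artifact, in the same order).
rater_a = [3, 4, 2, 3, 1, 4, 3, 2]
rater_b = [3, 3, 2, 4, 1, 4, 2, 2]

pairs = list(zip(rater_a, rater_b))
exact = sum(a == b for a, b in pairs) / len(pairs)      # identical scores
adjacent = sum(abs(a - b) <= 1 for a, b in pairs) / len(pairs)  # within 1 point

print(f"Exact agreement:    {exact:.0%}")
print(f"Adjacent agreement: {adjacent:.0%}")
```

Low agreement usually suggests the rubric language needs clarification or the raters need a norming session before scoring continues.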
Analyzing and Discussing Results
The penultimate phase of the assessment process is analyzing and discussing the assessment results. After a program has conducted its assessments, it needs to gather the data (quantitative or qualitative) and interpret the results. In many cases the analysis involves calculating basic descriptive statistics (tallies and percentages) of how students performed in relation to the benchmark that has been set. In other cases, it involves conducting more sophisticated analyses using inferential statistics or additional data to supplement the assessment data that have already been collected. For example, programs may want to supplement their findings with student demographics, majors, prerequisite course information, and grades so that cross-tabulations can be made. The Office of Institutional Research, Planning, and Assessment can assist you with these analyses.
Qualitative analysis, unlike quantitative analysis, does not follow a prescriptive set of rules; however, analyzing qualitative data does require thoughtful consideration and skill. Qualitative data analysis involves accurately transcribing your data, organizing it in a comprehensible way, and coding it. Coding involves categorizing data into themes or concepts and allows the researcher to identify common patterns and trends. Lehman College has atlas.ti software available to faculty and staff to help with this process. Please contact the Lehman College Help Desk to obtain the software.
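As a concrete (and purely illustrative) picture of the quantitative analysis described above, the sketch below computes the percentage of students meeting a benchmark and a simple cross-tabulation by student group. The scores, the "status" grouping, the column names, and the benchmark of 3 on a 4-point scale are all hypothetical; pandas is just one convenient tool for this kind of summary.

```python
import pandas as pd

# Hypothetical assessment records: one row per student artifact, with a rubric
# score (1-4) and a grouping variable used for a simple cross-tabulation.
df = pd.DataFrame({
    "score":  [4, 3, 2, 4, 1, 3, 3, 2, 4, 3],
    "status": ["transfer", "native", "native", "transfer", "native",
               "transfer", "native", "native", "transfer", "native"],
})

BENCHMARK = 3  # program-defined benchmark (placeholder value)
df["met_benchmark"] = df["score"] >= BENCHMARK

# Basic descriptive summary: the share of students meeting the benchmark.
print(f"{df['met_benchmark'].mean():.0%} of students met the benchmark")

# Cross-tabulation: benchmark attainment rates by student group.
print(pd.crosstab(df["status"], df["met_benchmark"], normalize="index"))
```

Summaries like these are a starting point for discussion; programs with larger datasets or questions about inferential statistics can work with the Office of Institutional Research, Planning, and Assessment.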
After an initial analysis of your data, you need to interpret what the results mean. This is best done in consultation with other stakeholders in the department, including chairs and directors. Additional input into the process provides added insight and perspective into the results that may not be evident to the person coordinating the assessment process (e.g., perhaps the results were caused by extraneous factors that were not considered). In these discussions, the department should identify potential remedies (interventions) that could help to improve student learning and development moving forward.
Initial discussions about the results often revolve around the following questions:
- Are the results valid?
- Why did students not do as well as expected?
- Why are we assessing this competency?
- What is the best way to assess this competency?
- Are the expectations too high / too low?
- How can we assess this outcome well if students cannot write well?
- We have no prerequisites that address this topic, what do you expect?
At the end of these discussions, it is not unusual for members of a department to interpret the results differently or to disagree on what the next steps should be. Nonetheless, by the end of these discussions, departments should come to agreement about possible future actions to help enhance student learning and development.
What if the results are very good?
This question is often asked. For example, let's say that 90% of students met the established benchmark; how can we improve student learning? First, it is important to recognize and celebrate this finding and share it with others. However, this does not mean the assessment results cannot be used for improvement. For example, the results could mean that the standards are too low: if almost everyone is performing at a very high level, perhaps the learning outcome is not challenging enough. The results could also mask problems students are experiencing on particular topics. For example, item analyses might reveal that students are consistently scoring lower on a set of items that all address the same topic. This information can be used to develop a plan to emphasize these topics more deliberately in the future.
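The following is a small sketch of the kind of item analysis mentioned above: computing the percent correct on each test item and averaging by topic to spot an area where students consistently underperform even when overall results look strong. The item names, topic labels, and response patterns are made up for illustration.

```python
from collections import defaultdict

# Hypothetical item-level test results: 1 = correct, 0 = incorrect,
# one list of student responses per item, plus a topic label for each item.
items = {
    "Q1": {"topic": "citation", "responses": [1, 1, 1, 0, 1, 1, 1, 1]},
    "Q2": {"topic": "citation", "responses": [1, 1, 0, 1, 1, 1, 1, 0]},
    "Q3": {"topic": "thesis",   "responses": [0, 1, 0, 0, 1, 0, 0, 1]},
    "Q4": {"topic": "thesis",   "responses": [1, 0, 0, 1, 0, 0, 1, 0]},
}

# Percent correct per item, then averaged by topic to reveal weak areas.
topic_scores = defaultdict(list)
for item, data in items.items():
    pct = sum(data["responses"]) / len(data["responses"])
    topic_scores[data["topic"]].append(pct)
    print(f"{item} ({data['topic']}): {pct:.0%} correct")

for topic, scores in topic_scores.items():
    print(f"Topic '{topic}': average {sum(scores) / len(scores):.0%} correct")
```

In a pattern like this, strong overall scores can coexist with one topic that students consistently miss, which is exactly the kind of finding a department can act on.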
Using Data for Improvement
After you have analyzed your data and interpreted the results with your colleagues, the final phase of the assessment process involves using the results effectively to make improvements designed to enhance student learning. Frequently referred to as "Closing the Loop," this last phase of the process is viewed as the most difficult to implement and therefore is often overlooked. However, failure to engage in this phase means that assessment becomes just an exercise in data collection, with the results sitting on a shelf collecting dust or buried deep on a hard drive.
The process of using the data for improvement is multifaceted. It involves developing an action (operational) plan and implementing it, analyzing additional data, and determining whether the strategy was effective in enhancing student learning.
Creating an Action Plan and Implementing It
An action plan consists of the strategies (actions) that will be implemented to improve teaching and learning. Some strategies are easy to implement and can be done almost immediately with little fanfare; others take time to develop and roll out. Further, some strategies can be implemented with no additional funding, while others will require new resources. Examples of possible actions include:
- Creating new lesson plans
- Developing supplemental materials (study guides)
- Implementing new pedagogical approaches
- Suggesting curriculum revisions (new courses, prerequisites)
- Creating new activities / workshops
- Purchasing new technology / equipment
Analyzing Your Data and Making a Determination
After an action has been developed and implemented, data need to be collected a second time to determine whether it was impactful. As in the previous phase of the assessment process, the results should be analyzed and deliberated upon by members of the department, who should then come to a determination as to whether the intervention was effective in improving student learning and development.
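A minimal sketch, with invented numbers, of the kind of before-and-after comparison a department might look at when making that determination is shown below. The counts and the intervention are hypothetical, and whether a change of this size is meaningful is a judgment for the department, not the code.

```python
# Hypothetical benchmark-attainment counts before and after an intervention
# (e.g., a new study guide), for the same learning outcome and benchmark.
before_met, before_total = 21, 40   # assessment cycle prior to the action
after_met, after_total = 29, 40     # assessment cycle after the action

before_rate = before_met / before_total
after_rate = after_met / after_total

print(f"Before intervention: {before_rate:.0%} met the benchmark")
print(f"After intervention:  {after_rate:.0%} met the benchmark")
print(f"Change: {after_rate - before_rate:+.0%}")
```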