Program Assessment

A program is defined as a cohesive course of study that typically leads to an academic award, such as a degree, certificate, or diploma. Assessment at this level is singled out as especially important because academic programs are the primary means by which the college pursues its mission. Faculty members have been very active in defining and assessing goals at the section and course levels, but the emphasis on assessment at the program level is relatively new. At this level, goals for student learning are usually determined by interdisciplinary groups because most programs combine the efforts of faculty and staff working in a wide variety of discipline areas.

Our degree programs include the Associate in Arts (AA), Associate in Fine Arts (AFA), Associate in Applied Science (AAS), and Associate in Science (AS) degrees. The relationship between program-level assessment and college-wide goals for student learning is illustrated in the diagram below.

Assessment is a cyclical process, centered on program goals, and geared toward making improvements in programs. In practice, this process is enacted in eight steps:

  1. Examine Program Goals
    If no program goals exist, they must be created with input from constituent groups who have a vested interest in the program (e.g., faculty, administrators, students, transfer institutions, accrediting bodies, professional organizations). Program goals are general statements about student learning that can be translated into measurable outcomes. If program goals already exist, they should be reviewed with an eye toward relevance, consistency, and measurability.
  2. Develop Student Competencies
    Student competencies are statements about expected student learning outcomes that can be measured. Student competencies should provide comprehensive coverage of the stated program goals. That is, every component of a program goal should be measured by at least one competency.
  3. Determine Measurement Points
    A lengthy program offers many possible measurement points, and it helps to identify them before selecting the points where assessment will take place. A Competency Map is a matrix that plots program goals and competencies against program components, such as courses and activities. Direct Measures are assessments that involve actual student performance, such as tests, papers, presentations, and productions. Indirect Measures ask students to report on their perceived learning through a survey, interview, or focus group.
  4. Develop Measurement Tools
    Once measurement points are identified, staff and faculty members must determine how the evidence of student learning is going to be collected. A Rubric is a fairly simple measurement tool for rating student performance against a set of criteria. Rubrics define varying levels of student competency with descriptors to facilitate objective scoring by different raters. A Survey can also be used to measure perceptions of student learning among students, faculty, employers, transfer institutions, or intern supervisors.
  5. Collect Data
    The first few rounds of data collection will serve as a pilot test of any new measurement tool. The tools should be modified, if necessary, to ensure the credibility and integrity of the collected data. The College is using a web-based software system called eLumen to organize our collected data. Data are stored in eLumen by student name and ID to allow faculty to review assessment data.
  6. Review Data
    Collected data must be compared against program goals and competencies to assess the program’s progress in achieving its goals. By contrasting performance on different goals, assessment data can help to identify relative strengths and weaknesses within a program. Data can also be compared against internally or externally determined benchmarks. A benchmark is an evaluative standard, such as “90% of students should meet or exceed a stated competency by the end of their program.”
  7. Implement Change
    A complete cycle of assessment should lead to informative feedback that can be used to strengthen a program. This is often called “closing the loop” in assessment programs. Assessment data might help a program to make needed changes in the curriculum, the pedagogy, the program goals and competencies, or the assessment methods used. Assessment data might also help to reaffirm strong programs that should be maintained as is for the near future.
  8. Document the Process
    The steps and outcomes of an effective assessment program must be documented to ensure continuity. We will always have some turnover among faculty, students, and staff, so careful documentation will ensure that our assessment efforts continue as an essential part of the college and each program.