Assessment 360: From Curriculum Mapping to Closing the Loop

Institutional Level: 
Higher Ed
Abstract: 

We will present one large institution’s processes and methodologies for assessment and share best practices for generating, scoring, and analyzing data from assessments of program and general education learning outcomes to improve teaching and learning. Samples of this assessment lifecycle from our bachelor’s programs will be shared.

Extended Abstract: 

A common challenge for higher education institutions is designing meaningful and reliable assessment programs that effectively measure student learning as a function of program outcomes. Assessment programs can serve multiple purposes, first and foremost the improvement of teaching and learning, as described by Halpern, Horton, Peden, and Pittenger (1993). Comparing student learning outcomes assessment within and across institutions provides useful feedback to faculty about which educational practices are effective for accomplishing student learning outcomes, and helps ensure that our graduates not only obtain the knowledge and skills required for the discipline but are also prepared for success in the workforce and society.

Assessment is also driven by accountability expectations from national and state governments, as well as regional accrediting agencies. Federal, state, and local governments want to ensure that public money supporting higher education through student assistance programs is being spent responsibly. Accrediting bodies likewise encourage universities to identify objective, quantifiable student learning outcomes and to demonstrate through documentation that assessment results are used for continuous improvement. Similar expectations that the college experience be applicable to the workforce come from other stakeholders, such as potential employers. For example, according to findings from a 2015 survey of employers conducted for the Association of American Colleges and Universities (AAC&U), learning outcomes such as communication, teamwork, ethics, critical/analytical thinking, and applied knowledge are rated as ‘Very Important’ by at least four in five employers (Hart Research Associates, 2015).

This presentation describes the curricular assessment cycle framework at American InterContinental University (AIU), which serves 10,000 primarily non-traditional, adult, online, undergraduate students. The starting point for the cycle is a curriculum map that ties each program’s student learning outcomes to institutional goals at the macro level and to course-level objectives at the micro level. This is followed by the development of authentic assessments with custom rubrics, which leads to accurate and consistent scoring of the assessments. This in turn generates data that is examined methodically across multiple variables, producing action plans for curricular and pedagogical improvements.

The Assessment Lifecycle

https://photos.app.goo.gl/PljD1ci7xZNY3S7J2

Program Learning Outcomes (PLOs) are developed for each academic program, representing the high-level competencies that students should acquire as they progress through the program. The PLOs are articulated in a psychometrically sound manner that enables the measurement of competencies at appropriate cognitive levels. The degree-level guidelines for AIU, as operationalized by a Degree Level Differentiation Rubric (adapted from the Lumina Foundation’s Degree Qualifications Profile), provide a framework for articulating the depth and breadth of knowledge, skills, and abilities that graduates are expected to possess upon completion of each respective degree level. The rubric also distinguishes levels of achievement with “increasing levels of challenge for student performance” from the associate to the master’s level. These guidelines ensure that the levels of performance expected of our students at each of the three degree levels are reflected appropriately in the Program Learning Outcomes developed for each program.

AIU’s Institutional Outcomes (IOs) derived from its mission statement are mapped, where appropriate, to PLOs at a high level in each program’s Curriculum Map (CM). The PLOs are also aligned with the standards of programmatic accreditors (where applicable). General education outcomes are also mapped to core courses in the CM in the undergraduate programs and are assessed in specific general education courses as well as in the core areas of the program.

Every degree program has an Assessment Plan that marks the courses addressing specific PLOs at the Introduce, Reinforce, and Mastery levels. Not only are PLOs listed in the core areas of the discipline, but General Education Outcomes (GEOs) in the undergraduate programs are listed both in general education courses and in the core areas of the discipline. Courses where the PLOs and GEOs are assessed (at the Introduce, Reinforce, and Mastery levels) are denoted with an “A” on the Curriculum Map, enabling us to track the development of student learning throughout the program.

A detailed syllabus/course roadmap/blueprint (called a Master Course Framework, MCF) lists the Terminal Course Objectives (TCOs): three to seven precise, directly measurable competency statements in a given area of study, couched in Bloom’s (1956) taxonomy. These TCOs are mapped to specific PLOs and GEOs. Unit Objectives are also developed to ensure that each unit’s learning materials and assignments/assessments support the outcomes. Faculty and Deans are responsible for identifying core competencies and outcomes for programs and courses during the program development phase, developing the assessment framework, and creating the Common Assessment and its accompanying Scoring Rubric.

Assessment of the PLOs and GEOs in specific courses occurs through a direct measure called the Common Assessments. Common Assessments are assignments that are designed to tap specific PLOs and GEOs at the Introduce, Reinforce, and Mastery levels per the Curriculum Map (CM). Common Assessments can take a variety of forms (paper, presentation, survey creation, case study, exam, portfolio, as appropriate to the assignment and the discipline) and application is emphasized. They are strategically placed in specific courses where the Program Learning Outcomes are assessed. The same Common Assessment is administered in all delivery modalities (online and ground) and across all sections of the course (the assessment and the scoring rubric are embedded in the course LMS) to facilitate consistency in analysis and comparisons.

Rubric development and implementation play a key role in the objective assessment process (Benjamin, 2007). Detailed rubrics for scoring the Common Assessments give instructors clear direction and focus. Faculty members receive detailed scoring-rubric training; they are responsible for administering the assessments in the classes they teach and for scoring the Common Assessment with the rubric, recording student attainment on the PLOs/GEOs. Scores are entered into an assessment system database, where they are pooled across multiple variables and can be organized, searched, and analyzed by relevant PLOs/GEOs, courses, campuses, delivery modality (blended versus online), and so on.

Benchmarks are set by each program, and program deans and faculty review assessment reports and data to determine whether or not outcomes are met. In some programs, such as the Bachelor of Science in Information Technology, students take the IS2010 exam at the culmination of the program. The results from such external assessments are triangulated with the Common Assessment data for comparison. The external assessments also allow AIU to compare its outcomes with those of peer institutions.

When the data show that a PLO has not been achieved, they are examined for trends and opportunities to identify the problem and find a solution. The problem may lie with the assessment itself, the rubric, faculty scoring and inter-rater reliability, the effectiveness of the learning materials, the quality of instruction, etc. Once problems are identified, detailed action plans are created for improvements in curriculum, instructional delivery, or support services. Action plans range from minor modifications to a major program overhaul. They are developed, approved, and implemented collectively by the Deans, Program Chairs, and faculty in an ongoing cycle of review and adjustment.

Each program’s Assessment Plan also identifies the year in which specific Program Learning Outcomes will be assessed as part of a three-year cycle. This typically culminates in a comprehensive program review wherein the faculty and Deans use a variety of indirect assessment measures such as data from student, faculty, alumni, and employer surveys, as well as program statistics, to complement the direct methods. Collectively, the assessment data provides AIU with progressive snapshots of student performance throughout their academic journey. It results in comprehensive action plans that, when executed, contribute to the continuing improvement process in areas such as curriculum, instruction, and resources.

This education session presentation will describe the processes and methodologies for assessment and share best practices on how to develop, administer, score, and analyze results for improvements in teaching and learning. It will also provide samples of this assessment lifecycle from our Bachelor of Science in Information Technology program.

Those who will benefit from this presentation include educators, administrators, faculty, curriculum/course designers, and training professionals in higher education institutions, at all levels of expertise.

The session outcomes are as follows: 

  1. Authors will present the process for designing and implementing a curriculum assessment program for undergraduate programs.

  2. The process of collecting, analyzing, and reviewing assessment data will also be presented, along with designing and implementing action plans based on the data review.

  3. Tips for incorporating general education outcomes within the program assessment map and ensuring faculty buy-in will be discussed.

  4. Participants will engage in mapping and scaffolding learning outcomes (course-level, program level, general education, and institutional level).

References

Benjamin, S. (2007). The quality rubric: A systematic approach for implementing quality principles and tools in classrooms and schools. Milwaukee, WI: ASQ Quality Press.

Bloom, B. S. (Ed.). (1956). Taxonomy of educational objectives: The classification of educational goals. Handbook I: Cognitive domain. New York, NY: David McKay.

Halpern, J. S., Horton, C. P., Peden, B. F., & Pittenger, D. J. (1993). Targeting outcomes: Covering your assessment concerns and needs. In T. V. McGovern (Ed.), Handbook for enhancing undergraduate education in psychology. Washington, DC: American Psychological Association.

Hart Research Associates. (2015). Falling short? College learning and career success. Washington, DC: Hart Research Associates.

Conference Track: 
Innovations, Tools, and Technologies
Session Type: 
Education Session
Intended Audience: 
Administrators
Design Thinkers
Faculty
Instructional Support
Training Professionals
Researchers