Bridge Evaluation Plan

March 6, 2009

Bridge Program Evaluation Plan
David Brigham, DCAS

Evaluation Purpose

Bridge Program Goals

Evaluation Object

Evaluation Focus

Institutional Context

Stakeholders

Evaluation Questions

Program Completion and Student Achievement

Academic Progress

Program Implementation

Information Collection

Information Analysis

Reporting Evaluation Findings

The following evaluation plan for the Bridge Program was prepared at the request of the Office of Academic Affairs (OAA) at Empire State College by David Brigham, Director of College Academic Support, in consultation with Yong-Lyun Kim, Lead Research Analyst, Outcomes Assessment and Institutional Research. The plan contains a description of proposed evaluation activities for the following college terms: Term 1 (September 2008), Term 3 (January 2009), Term 5 (May 2009), and Term 1 (September 2009).

Evaluation Purpose

The purpose of this evaluation is to determine the extent to which the Bridge Program is meeting its goals and to track the academic progress of students participating in the Bridge Program. Evaluation data obtained at the end of each college term will be reviewed and used to make adjustments in the Bridge Program in an ongoing manner throughout the course of the evaluation (September 2008-December 2009).

This evaluation uses an emergent, formative design that will change in response to issues, concerns, or circumstances arising during the course of the evaluation. Possible design changes include but are not limited to revisions to program goals, the focus of the evaluation, evaluation questions, and types of data collected.

Bridge Program Goals

  • Seventy percent of students enrolled in the Bridge Program will complete it.
  • Seventy percent of students completing Bridge Program modules will receive an overall score of meets expectations or higher on the final writing assessment and an acceptable score (to be determined) on the reading (COMPASS) skills assessment.
  • Seventy percent of students receiving an overall score of meets expectations or higher on the final skills assessments of the Bridge Program will maintain satisfactory academic progress as defined by the college’s student academic progress policy for two terms following completion of the Bridge Program.

Evaluation Object

The Bridge Program is a 15-week program of instruction designed to strengthen the writing and reading of students identified as having near-college level skills in these areas. The program has its roots in Empire State College’s Front Porch Project, a college-wide initiative which began in 2003 with the intention of providing increased college support for students during their first two terms of enrollment. The local Directors of Academic Support developed the Bridge Program under the leadership of the Assistant Vice President of Academic Affairs for Academic Services.

The Bridge Program began enrolling students in September 2008 as a voluntary, non-credit, tuition-free program of instruction. As part of the process of applying for admission to the college, students complete a skills assessment administered by the Admissions Office. Based on the results of the admissions assessment, the Admissions Office refers students determined to have near-college level writing and reading skills to the Director of Academic Support (DAS) at the student’s center for review. The DAS reviews the student’s admission assessment and, if indicated, may do further testing with the student. In addition to new students, currently enrolled students may also enter the Bridge Program by referral to center DAS from a mentor or instructor or through self-referral.

In December 2008 the DAS proposed a college-wide approach to assessing the skills of students referred to them for possible placement in the Bridge Program. They recommended that all DAS administer the COMPASS reading assessment and the same writing assessment (to be determined) to student referrals. Students who score below established cut scores would be placed in the Bridge Program. This evaluation plan assumes this recommendation will be implemented.

In its current form, the instructional content of the Bridge Program consists of eight modules developed in ANGEL, the college’s online learning course management system. Modules contain learning objectives, readings, activities, discussions, and writing assignments. DAS teach the same Bridge Program content and administer the same writing and reading assessment at the end of the Bridge Program to measure skill development. This assessment generates one of four ratings: does not meet expectations, approaches expectations, meets expectations, or exceeds expectations. Students who receive a rating of meets expectations or exceeds expectations are considered to have satisfactorily achieved Bridge Program learning objectives.

Before the start of a college term, each DAS receives a copy of the ANGEL master to customize for the local center. The DAS works with students to decide on the appropriate delivery method, as the Bridge Program may be delivered as an individualized study, group study, blended study, or fully online study. The Bridge Program is currently scheduled to run in terms 1, 3, and 5 each year.

Evaluation Focus

This evaluation plan focuses on the following four aspects of the Bridge Program: placement and nonplacement, program completion and student achievement, academic progress, and program implementation. Exploring each aspect should help determine the extent to which Bridge Program goals are being met and provide information that will enable changes to be made in the program to facilitate goal attainment.

Institutional Context

Through the establishment of the Front Porch Project, Empire State College has made an institutional commitment to provide academic support to students in their first two terms of enrollment. The 2006-2010 Strategic Plan Implementation in Progress (March 2008) provides a checklist of objectives accomplished in several strategic areas, including promoting student success by supporting “vulnerable” students in their first two terms of enrollment. Over the past two years, the college has also demonstrated its commitment to providing academic support for students by creating and filling eight DAS positions at the centers and establishing an Office of College Academic Support headed by a new Director of College Academic Support (DCAS). Among the DCAS’s responsibilities is implementing and monitoring the Bridge Program.

The Bridge Program is being implemented at a time of significant change at Empire State College. The Front Porch Project began under former President Moore, continued for one year under Acting President Joyce Elliot, and now continues under a new president, Alan Davis, who began his tenure in August 2008.

The Bridge Program is new to the college and represents a change in how students receive academic support. Before the DAS began working in the centers, academic support was provided primarily by mentors.

Stakeholders

The following groups have a stake in the outcome of this evaluation: students, DAS, mentors, and college administrators. If the results of this evaluation are to be accepted and effectively implemented, it is important that the evaluation address their concerns.

Evaluation Questions

The following sets of evaluation questions and sub-questions focus on each of the four areas: placement and nonplacement, program completion and achievement, academic progress, and program implementation. These questions will drive data collection and analysis.

For the purposes of this evaluation, “placement” will be defined as student referrals that DAS designate for enrollment in the Bridge Program. “Nonplacement” will be defined as student referrals that DAS do not designate for enrollment in the Bridge Program. Based on secondary assessment test scores (COMPASS and writing assessment), DAS may recommend that students not placed in the Bridge Program receive another form of academic support or no additional academic support. Data will be collected on students in each group, those placed and not placed in the Bridge Program, so that the academic progress of both groups of students can be tracked and compared. Collecting this information will enable assessment cut scores to be adjusted, if necessary.

The following evaluation questions focus on student placement and nonplacement:

  1. What percentages of referrals were enrolled in the Bridge Program, recommended for another form of academic support, or found not to require academic support?
    a. Who referred these students to the DAS?
    b. What are their admissions, reading assessment (COMPASS), and writing assessment scores?
    c. How much credit did they have when assessed for possible Bridge Program placement?
    d. What is their center?
    e. What was their GPA (if applicable)?
    f. What were the criteria for placement?
  2. What are the differences (1a.-1f. above) between referrals placed in the Bridge Program, referrals recommended for another form of academic support, and referrals found not to require academic support?

Program Completion and Student Achievement

Of students placed in the Bridge Program, it is important to know the percentage of students who complete the program and how successful they were in achieving learning outcomes. It would also be useful to understand why some students fail to complete the program and why some students fail to achieve course outcomes. Therefore, this evaluation includes questions that seek to understand the characteristics of completers and non-completers as well as the characteristics of students who achieve learning outcomes and those who do not. In addition, since the Bridge Program could potentially be delivered in four modes (individualized, group study, blended, and online), and administered at up to eight different centers, information will be collected to detect patterns that may be associated with delivery mode or center.

The following evaluation questions focus on program completion and student achievement:

  1. What are the characteristics of Bridge Program completers and non-completers?
    a. What percentage of students who start the Bridge Program complete it?
    b. How many credits did they carry while taking the Bridge Program?
    c. What was the mode of delivery?
    d. What was their center?
    e. What were their admissions, reading (COMPASS), and writing scores?
    f. What were their post-test reading (COMPASS) and writing scores?
    g. What was their workshop rating?
    h. How many modules were completed?
    i. What was the student’s own assessment of learning?
    j. What was the instructor’s assessment of the student’s learning?
    k. What percentage was rated meets expectations or higher?
    l. What were non-completers’ reasons for not completing?
  2. To what extent do students who complete the Bridge Program improve their reading and writing skills?
  3. What are the differences between the characteristics (1a.-1l. above) of completers rated as meets expectations or higher and completers rated below meets expectations?

Academic Progress

Tracking the progress of students who participate in the Bridge Program is a crucial aspect of this evaluation. The Bridge Program was implemented with the intention of facilitating the academic progress of students with near-college level skills through the first two terms of college, if not to graduation. Therefore it is necessary to monitor the short- and long-term progress of students placed in this program. In addition, it will be useful to track the short- and long-term progress of students referred to DAS for skills assessment but who were not placed in the Bridge Program. This information will help to inform whether the skills assessment cut scores need to be adjusted.

The evaluation of academic progress will use the college’s existing indicators of student progress as these are commonly understood and accepted by the stakeholders. In addition, using existing indicators facilitates data collection and reliability. These indicators will include: satisfactory academic progress (SAP), student grade point average (GPA), credits earned, and course or study grades.

The following evaluation questions focus on academic progress:

  1. To what extent is achievement level in the Bridge Program associated with short-term (two terms) and long-term (until graduation) academic progress?
    a. With SAP?
    b. With GPA?
    c. With credits earned?
    d. With specific studies completed?
    e. With grades received on these studies?
    f. Was there another skills development intervention during this time?
    g. What is their graduation rate? (long term only)
    h. What is their time to graduation? (long term only)
  2. What is the short-term (two terms) and long-term (until graduation) academic progress of referrals recommended for another form of academic support or found not to require academic support?
    a. What is their SAP?
    b. What is their GPA?
    c. What were their admissions, reading (COMPASS), and writing assessment scores?
    d. How many credits have they earned?
    e. What specific studies have they completed?
    f. What grades did they receive on these studies?
    g. Was there another skills development intervention during this time?
    h. What is their graduation rate?
    i. What is their time to graduation?

Program Implementation

When new programs such as the Bridge Program are launched, it is important that the evaluation design provides a way to capture feedback from those in the field who are delivering the program (i.e., the DAS) and those who experience the program (students). This feedback is necessary to obtain information that will permit improvements to be made to successive program launches. Therefore, this evaluation plan includes questions that attempt to obtain feedback on specific aspects of Bridge Program implementation with an eye toward identifying and correcting implementation weaknesses. These areas include communication about the program, the student selection process, registering students for the program, distributing and setting up course shells, the instructional quality of the modules, the usefulness of assessments, and unanticipated issues.

The following questions focus on program implementation:

  1. To what extent did centers implement the Bridge Program as planned?
    a. What delivery mode(s) did DAS use?
    b. Did DAS administer the pre- and post-assessments to each student?
    c. Did DAS work through the content as it was presented?
    d. What parts (if any) of the Bridge Program were DAS not able to implement?
  2. How well did the student selection process work?
    a. Did DAS receive referrals and supporting information in a timely fashion?
    b. To what extent were students with college-level skills referred to DAS?
    c. What problems (if any) were encountered with the selection process?
  3. How well did the registration process work?
    a. To what extent did the registration system permit convenient and efficient registration into the Bridge Program?
    b. What problems (if any) were encountered with the registration process?
  4. How well did the distribution and set up of ANGEL shells work?
    a. Were shells received in a timely fashion?
    b. Did instructors and students obtain ANGEL access in a timely manner?
    c. Were directions on setting up ANGEL adequate?
    d. Was the ANGEL shell fully functional?
  5. Were the pre- and post-assessments useful and appropriate?
    a. Was the content of the assessments at the appropriate level of difficulty?
    b. How well did the assessments yield useful information?
    c. What problems (if any) were encountered with using the assessments?
  6. To what extent were the learning activities and materials appropriate for students?
    a. Were learning outcomes clearly specified?
    b. How well were the learning activities aligned with the learning outcomes?
    c. Was the overall instructional design sound?
    d. Was instruction pitched at the right level of difficulty for students?
    e. What improvements to any aspect of instruction need to happen?
  7. What unanticipated outcomes occurred?
  8. How satisfied were students with the Bridge Program?
    a. To what extent do students believe their reading and writing skills improved as a result of the program?
    b. To what extent do students believe the investment of their time and effort in the Bridge Program was worth any benefit they may have derived from it?
    c. To what extent were students satisfied with:
      1. Textbook
      2. Learning activities
      3. Instructor performance
      4. Pre- and post-assessment
      5. Workload
      6. Difficulty
      7. Overall program experience
  9. What improvements to any aspect of instruction do students recommend?

Information Collection

The Office of College Academic Support (OCAS) will manage the collection of information for this evaluation. The table below summarizes the types of information OCAS staff will collect to address the evaluation questions. The table is organized by evaluation focus (column 1) and information source (columns 2-5). Individual cells under the information sources indicate the collection method (e.g., questionnaire). OCAS will be responsible for developing collection tools and protocols and storing collected information in a secure place.

Information Collection Table

Placement and Nonplacement Datatel DAS Students Admissions
  1. Percent of referrals enrolled in Bridge, recommended for other support, or found not to require support records retrieval questionnaire records retrieval
    a. Source of referral questionnaire
    b. Admission, reading (COMPASS), writing scores records retrieval questionnaire records retrieval
    c. Number of credits records retrieval
    d. Center records retrieval questionnaire
    e. GPA records retrieval records retrieval
    f. Criteria for placement questionnaire
  2. Differences (see 1a.-1f. above) between students placed in Bridge, recommended for other support, and found not to require support.

Program Completion

  1. Percent completing/not completing records retrieval questionnaire
    a. Number of credits while taking Bridge records retrieval
    b. Delivery mode questionnaire
    c. Center records retrieval questionnaire
    d. Admissions, reading (COMPASS), writing scores records retrieval questionnaire records retrieval
    e. Post-test reading (COMPASS) and writing scores records retrieval questionnaire
    f. Workshop rating (grade) records retrieval questionnaire
    g. Number of modules completed questionnaire
    h. Student’s own assessment of learning questionnaire
    i. Instructor’s assessment of learning questionnaire
    j. Percentage rated meets expectations or higher and below meets expectations records retrieval questionnaire
    k. Non-completers’ reasons questionnaire
  2. Improvement in writing / reading skills records retrieval questionnaire
  3. Characteristics (see 1a.-1k. above) of completers rated meets expectations or higher and completers rated below meets expectations records retrieval questionnaire

Academic Progress

  1. Bridge enrollees (post-assessment scores and workshop rating) records retrieval questionnaire
    a. SAP records retrieval
    b. GPA records retrieval
    c. Number of credits records retrieval
    d. Specific studies records retrieval
    e. Study grades records retrieval
    f. Other interventions questionnaire
    g. Graduation status records retrieval
    h. Time to graduation records retrieval
  2. Not placed (other support and no support) records retrieval questionnaire
    a. SAP records retrieval
    b. GPA records retrieval
    c. Admissions and assessment scores
    d. Number of credits records retrieval
    e. Specific studies records retrieval
    f. Study grades records retrieval
    g. Other interventions questionnaire
    h. Graduation status records retrieval
    i. Time to graduation records retrieval

Program Implementation

  1. Implemented as planned
    a. Delivery mode(s) quest / interview
    b. Pre- and post-assessments administered quest / interview
    c. Worked through content quest / interview
    d. Parts of Bridge not implemented quest / interview
  2. Selection process
    a. Received information quest / interview
    b. College-level referrals quest / interview records retrieval
    c. Problems encountered quest / interview
  3. Registration process
    a. Convenience and efficiency quest / interview questionnaire
    b. Problems encountered quest / interview
  4. Distribution/set-up of ANGEL shell
    a. Timely distribution quest / interview
    b. Student / instructor access quest / interview
    c. Set-up directions quest / interview
    d. ANGEL functional quest / interview
  5. Pre- and post-assessments
    a. Content difficulty level quest / interview
    b. Utility of information quest / interview
    c. Problems using quest / interview
  6. Learning activities appropriate
    a. Learning outcomes quest / interview
    b. Overall instructional design quest / interview
    c. Instruction at right level quest / interview
    d. Improvements suggested quest / interview
  7. Unanticipated outcomes (DAS) quest / interview
  8. Student satisfaction with the Bridge Program
    a. Perceived improvement (students) quest / interview
    b. Cost worth benefit (students) quest / interview
    c. Student satisfaction with: quest / interview
      1. Textbook quest / interview
      2. Learning activities quest / interview
      3. Instructor performance quest / interview
      4. Pre- and post-assessment quest / interview
      5. Workload quest / interview
      6. Difficulty quest / interview
      7. Overall program experience quest / interview
  9. Improvements needed (students) quest / interview

Information Analysis

The Office of College Academic Support (OCAS), in collaboration with the Office of Outcomes Assessment and Institutional Research (OAIR), will be responsible for the overall analysis and interpretation of evaluation information. OCAS will organize and maintain descriptive and qualitative data and bring it to bear on the evaluation questions and subquestions. OCAS will request assistance from OAIR with the statistical analysis and interpretation of quantitative data and with reviewing OCAS’s analysis and interpretation of evaluation information.

Analysis and Interpretation: Placement and Non-placement

Question 1: What percentages of referrals were enrolled in the Bridge Program, recommended for another form of academic support, or found not to require academic support?

Analysis:

  • Calculate percentages based on the total number of referrals and the number of referrals designated to each category (see question 1 above). 
  • Identify criteria for placing referrals in each category. 
  • Analyze data by center and source of referral to identify differences.

Interpretation: 

  • Use the percentage of student referrals in each category as a baseline for comparison for future terms.
  • Offer plausible explanations for patterns emerging from analysis by center and source of referral.

Method: frequency, content analysis
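
To make the intended frequency analysis concrete, the following Python sketch shows one way OCAS might tabulate placement percentages overall and by center and source of referral. It is illustrative only; the file name and column names (bridge_referrals.csv, placement_category, center, referral_source) are hypothetical placeholders for whatever fields the Datatel export and DAS questionnaires actually provide.

```python
import pandas as pd

# Hypothetical export of referral records, one row per referral
referrals = pd.read_csv("bridge_referrals.csv")

# Percentage of referrals in each placement category (baseline for future terms)
category_pct = (
    referrals["placement_category"]
    .value_counts(normalize=True)
    .mul(100)
    .round(1)
)
print(category_pct)

# The same percentages broken out by center and by source of referral,
# to surface patterns worth explaining in the interpretation step
for factor in ["center", "referral_source"]:
    breakdown = (
        referrals.groupby(factor)["placement_category"]
        .value_counts(normalize=True)
        .mul(100)
        .round(1)
    )
    print(breakdown)
```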

Question 2: What are the differences between the characteristics of student referrals placed in the Bridge Program, recommended for another form of academic support, and found not to require academic support?

Analysis:

  • Compare mean differences in terms of number of credits, GPA, and admission and assessment scores between referrals designated to each category (see question 2 above).
  • Compare criteria used for placement.
  • Analyze data by center and source of referral and identify differences.

Interpretation:

  • Use the size of mean differences between the characteristics of referrals placed in each category for comparison in future terms.
  • Offer plausible explanations for patterns emerging from analysis by center and referral source.

Method: frequency, t-test of group differences

Analysis and Interpretation: Program Completion and Student Achievement

Question 1: What are the characteristics of Bridge Program completers and non-
completers?

Analysis:

  • Calculate percentage based on the number of students who start the program (completing the first assignment) and those who complete all modules and the post-assessment.
  • Compare means for initial skills assessments, number of credits carried, GPA, and admissions score of completers with means of non-completers.
  • Compare data for completers and non-completers by center, mode of delivery, and number of modules completed
  • Conduct a content analysis of students’ reasons for not completing the Bridge Program.

Interpretation: 

  • Use the percentage of students who complete the Bridge Program as a baseline for comparison for future terms.
  • Identify differences between those who complete and those who do not in terms of number of credits, GPA, and admission scores and attempt to arrive at plausible explanations.
  • Identify top three reasons for students not completing and use them as a basis for possible revisions to the program or procedures.
  • Offer plausible explanations for differences identified in analysis by center, mode of delivery, and number of modules completed.

Method: frequency, t-test of group differences, content analysis
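
As a sketch of the t-test of group differences named above: assuming OCAS assembles a flat file with one row per Bridge Program enrollee, a completion flag, and the numeric measures listed in the analysis bullets, the comparison of completers and non-completers could look like the following. The file and column names are hypothetical.

```python
import pandas as pd
from scipy import stats

# Hypothetical export, one row per Bridge Program enrollee
students = pd.read_csv("bridge_enrollees.csv")

completers = students[students["completed"] == 1]
non_completers = students[students["completed"] == 0]

# Compare group means on each measure; Welch's t-test is used because the
# two groups will almost certainly differ in size and variance.
for measure in ["initial_reading_score", "initial_writing_score",
                "credits_carried", "gpa", "admissions_score"]:
    t_stat, p_value = stats.ttest_ind(
        completers[measure].dropna(),
        non_completers[measure].dropna(),
        equal_var=False,
    )
    print(f"{measure}: t = {t_stat:.2f}, p = {p_value:.3f}")
```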

Question 2: To what extent do students who complete the Bridge Program improve their reading and writing skills?

Analysis:

  • Calculate pre-assessment group means and post-assessment group means.
  • Do a content analysis of students’ self-assessments and instructors’ assessments of skills.

Interpretation:

  • Do a t-test on the group means to determine if the difference is statistically significant. 
  • A significance level of .05 or lower will indicate a significant difference between the means.
  • Determine the extent to which students’ self-assessments and instructors’ assessments of skills support or conflict with the pre- and post-assessment data.

Method: t-test of group differences, content analysis
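
One plausible reading of the t-test called for above is a paired comparison of each completer's pre- and post-assessment scores, since the same students are measured twice. A minimal sketch follows, assuming hypothetical column names for the COMPASS and writing scores.

```python
import pandas as pd
from scipy import stats

# Hypothetical export, one row per Bridge Program completer
completers = pd.read_csv("bridge_completers.csv")

for skill in ["reading", "writing"]:
    # Keep only students with both a pre- and a post-score for this skill
    paired = completers[[f"pre_{skill}", f"post_{skill}"]].dropna()
    t_stat, p_value = stats.ttest_rel(paired[f"post_{skill}"], paired[f"pre_{skill}"])
    mean_gain = (paired[f"post_{skill}"] - paired[f"pre_{skill}"]).mean()
    flag = "significant" if p_value < 0.05 else "not significant"
    print(f"{skill}: mean gain = {mean_gain:.2f}, t = {t_stat:.2f}, "
          f"p = {p_value:.3f} ({flag} at the .05 level)")
```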

Question 3: What percentage of students who complete the Bridge Program are rated meets expectations or higher, and what percentage are rated lower than meets expectations?

Analysis:

  • Calculate percentage based on the number of students who complete all modules and the post-assessment and receive a rating of meets expectations or higher and the number who receive a rating lower than meets expectations.
  • Compare means for initial skills assessment, number of credits carried, GPA, and admissions score of those rated meets expectations or higher with those who are rated lower than meets expectations.
  • Compare data for those rated meets expectations or higher with those who are rated lower by center, mode of delivery, and number of modules completed
  • Conduct a content analysis of students’ assessment of their own learning and a content analysis of DAS’s assessment of student learning.

Interpretation:

  • Use the percentage of students who complete the Bridge Program with a rating of meets expectations or higher as a baseline for comparison for future terms.
  • Identify differences between students rated meets expectations or higher and students rated lower than meets expectations in terms of initial skills assessment, number of credits carried, GPA, and admissions score and attempt to arrive at plausible explanations.
  • Compare the students’ assessment of their own learning and the DAS’s assessment of students’ learning and identify the level of agreement and differences. Offer plausible explanations for differences.
  • Offer plausible explanations for differences identified in analysis by center, mode of delivery, and number of modules completed.

Method: frequency, t-test of group differences, content analysis

Analysis and Interpretation: Academic Progress

Question 1: To what extent is achievement level in the Bridge Program associated with short-term (two terms) and long-term (until graduation) academic progress?

Analysis:

  • Compare mean post-assessment scores with mean GPA, SAP, grades and credits received for two terms after Bridge Program completion.
  • Compare mean post-assessment scores with mean GPA, SAP, credits received, graduation rate, and time to graduation.
  • Identify specific studies completed during two terms after Bridge Program completion and categorize in terms of writing and/or reading intensive.
  • Calculate the percentage of students completing Bridge Program work who scored meets expectations or higher.

Interpretation:

  • Calculate the level of correlation between mean post-assessment scores and GPA, SAP, grades and credits received for two terms after Bridge Program completion. Use as a benchmark.
  • Calculate the level of correlation between mean post-assessment scores and GPA, SAP, credits received, graduation rate, and time to graduation. Use as a benchmark.
  • Calculate the level of correlation between mean post-assessment scores and grades on studies in terms of their level of writing and/or reading intensiveness.
  • Compare the percentage of students rated meets expectations or higher who maintained satisfactory academic progress for two terms against the Bridge Program goal of 70 percent. Students with an SAP of “good standing” will be considered to be maintaining satisfactory academic progress.

Method: frequency, correlation, content analysis
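
The correlation and goal-comparison steps above might be implemented along the following lines. The sketch assumes a merged file of Bridge Program completers with post-assessment scores, two-term progress indicators, and an SAP standing field; all file and column names are hypothetical, and graduation status is assumed to be coded 0/1.

```python
import pandas as pd

# Hypothetical merged dataset of Bridge Program completers
progress = pd.read_csv("bridge_progress.csv")

indicators = ["gpa_two_terms", "credits_two_terms", "mean_study_grade",
              "graduated", "terms_to_graduation"]

# Pearson correlations between post-assessment score and each progress
# indicator; these serve as the benchmarks described above.
correlations = (
    progress[["post_assessment_score"] + indicators]
    .corr(method="pearson")["post_assessment_score"]
    .drop("post_assessment_score")
)
print(correlations.round(2))

# Percentage of students rated meets expectations or higher who maintained
# SAP "good standing" for two terms, compared with the 70 percent goal
met = progress[progress["rating"].isin(["meets expectations", "exceeds expectations"])]
pct_sap = (met["sap_two_terms"] == "good standing").mean() * 100
print(f"{pct_sap:.1f}% maintained satisfactory academic progress (goal: 70%)")
```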

Question 2: What is the short-term (two terms) and long-term (until graduation) academic progress of referrals recommended for another form of academic support and those found not to require academic support?

Analysis:

  • Compare the mean GPA, SAP, grades and credits received by referrals in each category (question 2 above) with those who completed the Bridge Program with a rating of meets expectations or higher.
  • Compare the mean GPA, SAP, grades, credits received, graduation rate, and time to graduation for students in each category with those who completed the Bridge Program with a rating of meets expectations or higher.

Interpretation:

  • Do a t-test on the group means to determine if differences between GPA, SAP, grades, and credits received by students (for the two terms following the placement decision) are statistically significant. A significance level of .05 or lower will indicate a significant difference between the means.
  • Do a t-test on the group means to determine if differences between GPA, SAP, grades, credits received by students, graduation rate, and time to graduation are statistically significant. A significance level of .05 or lower will indicate a significant difference between the means.

Method: t-test of group differences

Analysis and Interpretation: Program Implementation

Question 1: To what extent did centers implement the Bridge Program as planned?

Analysis:

  • Compare the delivery modes used at each center.
  • Compare mean post-test scores across centers and delivery modes.
  • Ascertain the extent to which each student has a pre- and post-Bridge Program assessment score and that DAS worked through content provided in their ANGEL shell.
  • Identify parts of the Bridge Program DAS were not able to implement.

Interpretation:

  • Evidence should support multiple delivery modes across centers.
  • Each student should have a pre-test score and all completers should have a post-test score. Fifty percent of non-completers should have a post-test score.
  • Summarize parts of the Bridge Program DAS were not able to implement and reasons why DAS were not able to implement these parts.
  • Calculate the level of correlation between mean post-test scores by center and delivery mode.

Method: frequency, t-test of group differences, content analysis

Question 2: What aspects of the Bridge Program implementation need improvement?

Analysis:

  • Identify the satisfaction level of DAS with the following aspects of program implementation: student selection, registration, ANGEL shell distribution, pre- and post-assessments, learning activities, and learning materials.
  • Identify the satisfaction level of students with the following aspects of program implementation: selection, orientation, assessments, instructional quality, and skill improvement.
  • Do a content analysis of open-ended comments of DAS and students regarding aspects of program implementation.

Interpretation:

  • Mean satisfaction level across centers on each aspect of program implementation should be at least 4.0 on a 5.0-point scale. Suggest revisions to any aspect with a mean satisfaction level below 4.0.
  • Rank order by frequency problems identified in open-ended comments.

Method: frequency, survey, interviews, focus group, content analysis
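
As an illustration of the 4.0 threshold check described above, the sketch below computes mean satisfaction by center for each aspect of implementation and flags any aspect that falls short. It assumes DAS and student questionnaire responses on a 1-5 scale, one row per respondent; the file and column names are hypothetical.

```python
import pandas as pd

# Hypothetical survey export, one row per respondent
responses = pd.read_csv("implementation_survey.csv")

aspects = ["selection", "registration", "angel_distribution",
           "assessments", "learning_activities", "learning_materials"]

# Mean satisfaction per aspect, by center, on the 5-point scale
means = responses.groupby("center")[aspects].mean()
print(means.round(2))

# Flag any aspect at any center with a mean below the 4.0 threshold
below = means.lt(4.0)
for center, row in below.iterrows():
    flagged = [aspect for aspect, is_low in row.items() if is_low]
    if flagged:
        print(f"{center}: consider revisions to {', '.join(flagged)}")
```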

Reporting Evaluation Findings

The reporting schedule for this evaluation seeks to meet the information needs of key evaluation stakeholders (students, DAS, mentors, and administrators) in a timely fashion, starting with the evaluation design itself and ending with the final report of evaluation results. Because this is a formative evaluation, reports will be issued frequently and may be revised to accommodate unanticipated circumstances. Report formats will vary with the purpose of the report and the audience. Planned reports include mid-term updates, end of term progress reports, and a comprehensive final report covering the full period of the evaluation. Various derivative reports such as PowerPoint presentations will be created as appropriate to communicate with various audiences.

Report Schedule

Event | When | Format | Content | Audience
Evaluation design draft | mid-Sept. 08 | written report | detailed evaluation plan | Assistant VP/AA for Academic Services
Revised evaluation design | late-Oct. 08 | written report | revisions to detailed plan | Assistant VP/AA for Academic Services
Additional revisions to evaluation design | ongoing as conditions merit | written report | revisions to detailed plan | Assistant VP/AA for Academic Services
Mid-Term 1 update | mid-Oct. 08 | memo | participation numbers, delivery | Assistant VP/AA for Academic Services, DAS
Term 1 2008 progress report | mid-Jan. 09 | written report, discussion | Term 1 findings and recommendations | Assistant VP/AA for Academic Services, DAS
Mid-Term 3 update | mid-Feb. 09 | memo | participation numbers, delivery | Assistant VP/AA for Academic Services, DAS
Term 3 2009 progress report | mid-June 09 | written report, discussion | Term 3 findings and recommendations | Assistant VP/AA for Academic Services, DAS
Mid-Term 5 update | mid-July 09 | memo | participation numbers, delivery | Assistant VP/AA for Academic Services, DAS
Term 5 2009 progress report | mid-Sept. 09 | written report, discussion | Term 5 findings and recommendations | Assistant VP/AA for Academic Services, DAS
Mid-Term 1 update | mid-Oct. 09 | memo | participation numbers, delivery | Assistant VP/AA for Academic Services, DAS
Term 1 2009 progress report | mid-Jan. 10 | written report, discussion | Term 1 findings and recommendations | Assistant VP/AA for Academic Services, DAS
Final report draft | mid-March 10 | written report | summary, conclusions, recommendations | Assistant VP/AA for Academic Services, DAS, selected administrators
Final report | mid-June 10 | written report, presentations | summary, conclusions, recommendations | Assistant VP/AA for Academic Services, DAS, selected administrators
Highlights | mid-July 10 | Web announcements, newsletters | summary of results | students, mentors

Managing Evaluation Activities

The Director of College Academic Support will be responsible for managing evaluation activities. The following table displays major evaluation tasks and indicates who will carry them out. A detailed work plan including subtasks and due dates will guide this work.

Evaluation Task | Who
Evaluation design | Brigham
Mid-term information collection & updates | Hilton, Brigham
Term information collection, analysis, and progress reports | Hilton, Brigham, Kim (Yong)
Final report draft | Brigham, Kim (Yong)
Final report | Brigham, Kim (Yong)
Additional reports and announcements | Brigham