Faculty Evaluation System Review of Related Literature

General Overview

            The system was developed for the Information Technology Faculty because it is more cost-efficient and cost-effective than manual evaluation. It allows easier data collection and more accurate analysis in less time. The evaluation integrates data from students, peers, and administrators to provide meaningful evaluative information for both faculty use in self-improvement efforts and administrative use in making personnel decisions.
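
As a rough illustration of the kind of data integration described above, the sketch below combines ratings from the three evaluator groups into a single summary. The class name, field names, and sample values are assumptions made for illustration; they are not taken from the system itself.

```python
# Minimal sketch (assumed structure, not the actual system) of combining
# student, peer, and administrator ratings into one evaluative summary.
from dataclasses import dataclass, field
from statistics import mean


@dataclass
class FacultyEvaluation:
    faculty_name: str
    student_ratings: list = field(default_factory=list)  # assumed 1-5 scale
    peer_ratings: list = field(default_factory=list)     # assumed 1-5 scale
    admin_ratings: list = field(default_factory=list)    # assumed 1-5 scale

    def summary(self):
        """Average each source separately, then average across sources."""
        sources = {
            "students": self.student_ratings,
            "peers": self.peer_ratings,
            "administrators": self.admin_ratings,
        }
        per_source = {name: mean(r) for name, r in sources.items() if r}
        overall = mean(per_source.values()) if per_source else None
        return per_source, overall


if __name__ == "__main__":
    record = FacultyEvaluation(
        faculty_name="Sample Faculty Member",
        student_ratings=[4, 5, 4, 3],
        peer_ratings=[4, 4],
        admin_ratings=[5],
    )
    print(record.summary())
```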

REVIEW OF RELATED LITERATURE

Foreign Related Studies

Personnel review binders used for retention, promotion, and tenure decisions may go the way of the typewriter as electronic portfolio systems continue to gain ground as effective, paperless solutions. For the past three years, the University of Wisconsin-La Crosse has used electronic faculty portfolios for all retention, promotion, and instructional academic staff rehiring decisions. The web-based system has not only simplified the process for faculty; committee members also find it more convenient and efficient.

Faculty Performance Evaluation in Accredited U.S. Public Health Graduate Schools and Programs: A National Study

The study aimed to provide baseline data on the evaluation of faculty performance in U.S. schools and programs of public health. The authors administered an anonymous Internet-based questionnaire using PHP Surveyor. The invited sample consisted of individuals listed in the Council on Education for Public Health (CEPH) Directory of Accredited Schools and Programs of Public Health. The authors explored performance measures in teaching, research, and service, and assessed how faculty performance measures are used.

A total of 64 individuals (60.4%) responded to the survey, with 26 (40.6%) reporting accreditation/reaccreditation by CEPH within the preceding 24 months. Although all schools and programs employ faculty performance evaluations, a significant difference exists between schools and programs in the use of results for merit pay increases and mentoring purposes. Thirty-one (48.4%) of the organizations published minimum performance expectations. Fifty-nine (92.2%) of the respondents counted the number of publications, but only 22 (34.4%) formally evaluated their quality. Sixty-two (96.9%) evaluated teaching through student course evaluations, but only 29 (45.3%) engaged in peer assessment. Although aggregate results of teaching evaluation are available to faculty and administrators, this information is often unavailable to students and the public. Most schools and programs documented faculty service activities qualitatively but neither assessed them quantitatively nor evaluated their impact.

This study provides insight into how schools and programs of public health evaluate faculty performance. Results suggest that although schools and programs do evaluate faculty performance on a basic level, many do not devote substantial attention to this process. (http://www.ncbi.nlm.nih.gov/pubmed/18820530)

Evaluating Faculty Performance: A Systematically Designed and Assessed Approach

The authors explain how the Department of Family Practice and Community Health (DFPCH) at the University of Minnesota School of Medicine has responded to the need to create for its faculty an evaluation system that provides information for both feedback and merit-pay decisions. The development process, begun in 1996, is described, and its present format detailed. Also presented are the results of a 1999 assessment of the system, which found high satisfaction among the faculty and the department head. In particular, this system has allowed the department head to have a more objective basis for making salary decisions, to increase his role as coach, and to commit more time to career correction and/or development. Other observed outcomes include an enhanced ability to track faculty productivity, increased clarity in organizational structure and goals, increased research productivity, and early retirement of senior faculty receiving low evaluations. The key components of the DFPCH system mirror recommended elements for the design of faculty evaluation systems offered by evaluation professionals. Specific elements that the DFPCH found critical to success were stable and supportive departmental and project leadership, supportive faculty, skilled staff, a willingness to weather resistance to change, tailoring of the system to the department’s specific needs and culture, and a willingness to allow the process to evolve. A key question that the evaluation system has evoked at the DFPCH is whether “merit” equals “worth”; that is, does the collective meritorious work of faculty members effectively address program and departmental goals? (http://www.facultyfocus.com/articles/faculty-evaluation/electronic-faculty-portfolios-can-streamline-personnel-matters/)

Computerized Employee Evaluation Processing Apparatus and Method

According to the inventor, William Brent Bradshaw (Pleasant Grove, UT), the computerized employee evaluation processing apparatus and method is a method for computerized industrial process control that provides computers networked to communicate with one another. Each computer active in the system is responsible for at least a portion of the process and for at least one decision about the process to be controlled, which has an output. All activities are characterized by type, the types of activities forming a universal set that includes sensing facts, linking facts into a meaningful context, and evaluating meaning to formulate a decision. An entity responsible for an assigned decision conducts a series of activities selected from the three types, which may be applied recursively. Decisions are communicated between computers through the system to control the process. Output from the process is then produced according to a combination of the decisions reported from each computer corresponding to a responsible person or other entity. In various embodiments, the process control may cover hardware product development, manufacturing, chemical composition processing, or data collection and processing, such as from instruments and machines or computerized information processes including employee evaluation.
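
To make the three activity types in the abstract above concrete, the sketch below walks one batch of evaluation entries through sensing, linking, and evaluating steps. The function names, the grouping rule, and the decision rule are illustrative assumptions, not details from the patent.

```python
# Hedged sketch of the three activity types named in the patent abstract:
# sensing facts, linking facts into a meaningful context, and evaluating
# that context to formulate a decision. All names and rules are assumed.

def sense_facts(raw_inputs):
    """Sensing: gather raw observations, discarding empty entries."""
    return [entry.strip() for entry in raw_inputs if entry.strip()]


def link_facts(facts):
    """Linking: place facts into a meaningful context by grouping per criterion."""
    context = {}
    for fact in facts:
        criterion, _, value = fact.partition(":")
        context.setdefault(criterion.strip(), []).append(value.strip())
    return context


def evaluate(context):
    """Evaluating: turn the linked context into a decision (assumed rule)."""
    sparse = [c for c, values in context.items() if len(values) < 2]
    return "collect more data" if sparse else "data sufficient for decision"


if __name__ == "__main__":
    raw = ["teaching: 4", "teaching: 5", "research: 3", "  "]
    decision = evaluate(link_facts(sense_facts(raw)))
    print(decision)  # this decision would then be communicated to other nodes
```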

Issues in Developing a Faculty Evaluation System.

The increasing demands for accountability in higher education are resulting in calls for important personnel decisions, such as promotion, tenure, pay, and continuation, to be based directly on the outcomes of systematic faculty evaluations. This article provides a step-by-step procedure for developing a fair and meaningful faculty evaluation system on which such personnel decisions can be based. The procedure systematically involves faculty and administrators in the design and development of a faculty evaluation program that reflects the unique values, priorities, and heritage of an institution. The resultant faculty evaluation system integrates data from students, peers, and administrators to provide meaningful evaluative information for both faculty use in self-improvement efforts and administrative use in making personnel decisions that are based on a valid and reliable faculty performance record.

Using a Faculty Evaluation Triad to Achieve Evidence-Based Teaching.

An effective and comprehensive faculty evaluation system provides both formative and summative data for ongoing faculty development. It also provides data for annual faculty evaluation and for tenure and promotion decision making. To achieve an effective system, a triad of faculty evaluation data sources (student ratings, teaching portfolio, and peer evaluation) was developed. Concurrently, a system of faculty mentorship was implemented, as well as an administrative structure to effectively use data to assist in merit pay and promotion decisions. Using a comprehensive, evidence-based system to document, analyze, and improve teaching effectiveness is essential to assuring excellence in teaching and learning.

LOCAL RELATED STUDIES

Computerized faculty evaluation system next semester: University of Santo Tomas

GONE are the times when students shade cards to evaluate their professors, as the new online evaluation will be introduced this coming semester.

Evaluators, composed of students and faculty members’ superiors, will occupy computer laboratories within their respective faculties and colleges at a given time for the online evaluation, according to project manager Rowella Raymundo.

They will then be given a username and a password to be able to log in to the website, said Jaime Dolor, Jr., program webmaster.

Just like in the old system, evaluators assess a faculty member’s performance based on different criteria with ratings from one (very poor) to five (very good). Once done, one will be able to view the results of one’s evaluation of a faculty member.

The University decided to shift from manual to online evaluation because the latter is more “cost-efficient and cost-effective,” said Prof. Jaime Dolor, Jr., Administrator for Software Development and Data Processing.

The program, through the initiative of Vice-Rector for Academic Affairs Dr. Armando de Jesus, will give way to easier collection and more accurate data analysis in less time, added Dolor.

Accurate computation of results is expected since the program tabulates the data. This lessens the chances of human error, said Dolor.
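
As a hedged sketch of this tabulation step, the snippet below averages ratings per criterion on the one-to-five scale described earlier. The criterion names and responses are invented for illustration and are not taken from the UST system.

```python
# Minimal sketch of automated tabulation of 1 (very poor) to 5 (very good)
# ratings. Criterion names and responses are assumed for illustration only.
from collections import defaultdict
from statistics import mean

# Each response is (criterion, rating) as submitted by one evaluator.
responses = [
    ("mastery of the subject", 5),
    ("mastery of the subject", 4),
    ("classroom management", 3),
    ("classroom management", 4),
]

by_criterion = defaultdict(list)
for criterion, rating in responses:
    if not 1 <= rating <= 5:
        raise ValueError(f"rating out of range: {rating}")
    by_criterion[criterion].append(rating)

# Automatic tabulation removes the manual counting where human error
# could otherwise creep in.
for criterion, ratings in sorted(by_criterion.items()):
    print(f"{criterion}: mean {mean(ratings):.2f} from {len(ratings)} ratings")
```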

“The new system is better because confidentiality is maintained. As soon as the evaluators finish the evaluation, the information they have encoded will be sent to a database to which only the Office of the Vice-Rector for Academic Affairs (OVRAA) has access,” Dolor said. Not everyone can view the overall results.

Evaluators will only be able to gain access to the program using University computers since it is available only on the Intranet, that is, within the University’s network. This is to minimize the chances of virus attacks and program hijacking. (http://www.highbeam.com/doc/1G1-116538716.html)
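
A minimal sketch of an intranet-only check appears below. The address range is an assumption chosen for illustration; the article does not describe the University’s actual network configuration.

```python
# Hedged sketch of restricting access to clients inside a campus network.
# The 10.0.0.0/8 range is an assumed private range, not UST's real setup.
import ipaddress

INTRANET = ipaddress.ip_network("10.0.0.0/8")


def is_intranet_client(client_ip: str) -> bool:
    """Return True only if the request originates inside the campus network."""
    return ipaddress.ip_address(client_ip) in INTRANET


if __name__ == "__main__":
    print(is_intranet_client("10.4.12.7"))    # True: inside the network, allowed
    print(is_intranet_client("203.0.113.9"))  # False: outside, rejected
```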

PRIOR ART

A Faculty Evaluation Model for Online Instructors: Mentoring and Evaluation in the Online Classroom

The rapid growth of online learning has mandated the development of faculty evaluation models geared specifically toward the unique demands of the online classroom. With a foundation in the best practices of online learning, adapted to meet the dynamics of a growing online program, the Online Instructor Evaluation System created at Park University serves the dual purpose of mentoring and faculty evaluation. As such, the model contains two distinct phases of interaction: formative reviews and a summative evaluation. Beyond its critical role in instructor retention, program assessment, and accreditation, this faculty evaluation system signals the University’s commitment to ongoing professional development. The Online Instructor Evaluation System maximizes the potential of faculty evaluation to inspire reflection and growth; encourages the persistent professional development needs of online instructors; emphasizes the process of teaching as well as product; incorporates multiple perspectives to capture a comprehensive view of instructor performance; and educates key on-ground university constituents about online learning.

In the infancy of online instruction, considerable emphasis was given to demonstrating equivalence between online and traditional face-to-face instruction. This movement extended from pedagogy to evaluation as many online programs mirrored established face-to-face processes for faculty evaluation when creating models for the virtual classroom. With the rapid growth of online learning, these early evaluation models have revealed limited relevance to the online environment both in content and implementation. To address the ineffectiveness of traditional faculty evaluation models for use with online faculty, as well as to contribute to the growth of online learning as a field (and not simply a practice), innovative faculty evaluation models that are geared specifically to the unique demands, expectations and requirements of modern online learning must be developed.

The evaluation model for online faculty at Park University was created to meet the unique demands of an evolving online program. While Park University was founded as a small, private liberal-arts college in 1875, the original campus has grown to include graduate programs, 42 nationwide campus centers, and an extensive online program supporting 45,000 annual student enrollments in seven online degree-completion programs and four fully online graduate programs. Park University’s culture is that of a teaching-oriented institution, with emerging expectations for faculty scholarship, research, and service. The institutional complexity at Park University reflects challenges found across a host of institutions offering 2- or 4-year degrees, in public or private settings, and serving traditional or adult student populations. As such, the University’s online faculty evaluation model is potentially translatable to an equally wide range of higher-learning institutions. With the increasing popularity and growth of online learning, it is essential to establish clear, direct, relevant guidelines for evaluating online faculty that maintain instructional quality and promote best practices in online education. (http://www.westga.edu/~distance/ojdla/fall83/mandernach83.htm)

Synthesis

As part of the educational assessment process, faculty evaluation attempts to assess and quantify the effectiveness of teaching professionals. Faculty performance serves as the administration’s basis for retention, promotion, and other personnel decisions. Through computerization, accurate and reliable results can be provided in less time.

As discussed above, Bradshaw’s computerized employee evaluation processing apparatus and method shows how networked computers can divide a controlled process into three types of activities: sensing facts, linking them into a meaningful context, and evaluating that context to formulate decisions. The decisions are then communicated between computers to control the process, an approach that extends to computerized information processes such as employee, and by extension faculty, evaluation.
