Competition in education and research, brought about by the evaluation system, may drive everyone in the same direction at the cost of diversity
Feedback is an important tool to nudge people and organisations towards desirable behaviour. Nobel laureate Richard H Thaler and his co-author Cass R Sunstein, in their international bestseller Nudge, suggest feedback as one of the strategies to motivate agents to adopt responsive behaviour. It runs counter to the command-and-control policies of governments and the paternalism of institutions. Nudging human behaviour in a desirable direction without any command and control is what they call “libertarian paternalism.” A nudge in the right direction may be as simple as a laptop warning the user to plug in the charger when the battery is about to die, or the display of a car suggesting that the driver change gears when the gear engaged and the speed of the car do not match. These feedback mechanisms are alarms which nudge people to take corrective measures.
Education, whether delivered by government or private organisations, benefits from feedback to steer it in the intended direction. The feedback may be on the course, the faculty or the educational institute. It helps enhance performance and improve the delivery of education through the voluntary adoption of corrective measures. In higher academics, the ranking of journals, again based on feedback on the quality of research published, is an important mechanism to improve research and publication. Feedback, when made public, increases competition among peers. It also works as a mechanism to remove the asymmetry of information in the market: potential customers or beneficiaries become aware of the quality of the goods or services offered. Different stakeholders provide feedback on each element of education and research. On the course and the faculty, it is students who provide feedback, which is meant to improve the course content and the faculty’s delivery. Educational institutes are rated by different agencies, including the Government, national and international bodies and the media, on their infrastructure, processes and quality of education. Research journals are assessed by the number of citations that articles published in them receive over a stipulated period. In all this, the moot question is: how far does the feedback mechanism steer the delivery of education services in the desired direction?
Feedback on courses and faculty, since it is given by students, may be counterproductive. Students may not be best placed to determine the desired pattern of delivery, as they are not competent enough to assess it. Nevertheless, many renowned educational institutes use student feedback to evaluate faculty performance; it is even considered for promotions. However, there are exceptions. Harvard Business School does not take student feedback on any course or faculty. When asked about it, one tenured professor replied, “we do not take feedback from amateurs.” When a course or faculty member has to be assessed, experts in the area attend the class and appraise the delivery.
Research is an extremely complicated output, shaped by methodology, results and the overall interest in a particular topic. The citation of an article may depend on all these factors. Journals in the social sciences and management may at times prefer publishing certain types of results. A journal that aims to increase citations may prefer articles on subjects likely to attract enough research funding in the future. New ideas, or results which contradict a dominant idea, may not receive enough funding and attention. Thus, an endogenous system is created which reinforces the dominant idea and is detrimental to newer, provocative ones.
This problem is more severe for lesser-known institutes from developing countries. Each research article goes through a peer-review process conducted by the journals. The editors take a decision on publication after taking into account the reviewers’ comments. Nevertheless, the reviewers’ performance is not predictable. In a 2007 study on 306 experienced reviewers, published in PLOS Medicine, researchers found that there is no scientifically-established predictor of reviewer performance. Hence it is not possible to systematically improve the selection of reviewers and implement a routine review rating system.
Sadly, journals do take reviewer ratings from the editors. Furthermore, journal editors may find articles with a very new or provocative idea, or a result contrary to dominant ideas, unacceptable, more so when the researchers are not affiliated to renowned institutes or are not themselves well-known. Hence, the feedback process in research may not always encourage path-breaking discoveries, especially for developing nations.
Ranking or rating of educational institutions is considered a way of giving feedback on an institute’s performance on certain predetermined indicators. Over the years, ranking and accreditation have gained strength and momentum globally, including in India. Ranking is perceived as an indicator of the quality of services offered by educational institutions. There seems to be a growing consensus that ranking influences the perception of stakeholders (students, recruiters and investors) about the prospects of educational institutions. While there is no denying that ranking has made institutes look at the quality of their services, it has also introduced new practices within the sector. From the viewpoint of organisational research, ranking has offered a new template to educational institutions and codified them into different categories. After the rankings of the National Institutional Ranking Framework (NIRF) were announced, many educational institutes showcased their positions on their websites to demonstrate their skills, achievements and desirability to stakeholders.
The organisational template propagated through ranking carries its own characteristics. For example, under the NIRF, institutes are assessed on five parameters focussing on teaching, publications, consultancy, employability and overall perception. They are measured along these parameters to identify the “best” ones, those scoring the highest marks across the parameters. Going forward, these institutes would become role models and, irrespective of their individual values, purpose and origin, all would be in a race to adopt a codified organisational template. This would have a detrimental impact on institutions striving to pursue a niche domain. The codified template would often fail to recognise the unique features of educational institutions stemming from their values and origin. As a result, such institutes would fall behind on the so-called performance indicators, creating a poor impression of the quality of education they impart. This, in turn, would hurt their ability to attract resources and eventually erode the very individuality of organisations.
As the ranking is made public, this feedback mechanism ignites fierce competition among educational institutes. The urgency to perform well in the ranking exercise has led many to adopt the recommended organisational template in a hurried manner. The high-speed diffusion of the template is often facilitated by a new breed of “institutional intermediaries”, i.e. entities that help organisations build the capacity to adopt the new template. In recent years, the ranking industry in higher education has been populated by intermediaries certifying institutions through their own ranking exercises. So far, their role has been primarily limited to assessing quality on indicators. We should now expect to see more intermediaries helping educational institutions build their capacity to perform well in rankings and adopt a standardised template.
The feedback mechanism should nudge desirable behaviour, but it may be counterproductive to education and research when that feedback is made public, for then it becomes a means of increasing competition in a particular direction. Two major problems in the evaluation mechanism in education stand out. One, feedback is taken from those whose expertise, capability or eligibility to assess is questionable. A difficult subject would eventually be dropped from the curriculum, or a strict instructor penalised. Bias in the assessment of a new idea or contradictory results may stifle publication in journals. Two, when assessment is based on a standard set of criteria and is made public, it suppresses the emergence and growth of educational organisations with diverse ideas and objectives. Competition brought through the feedback system may drive all in the same direction at the cost of diversity.
(De is Associate Professor and Sarma is Assistant Professor, Institute of Rural Management, Anand. Views expressed here are personal)