
Legal Education Digest


Arup, C --- "Research assessment and legal scholarship" [2009] LegEdDig 25; (2009) 17(2) Legal Education Digest 34


Research assessment and legal scholarship

C Arup

18 Legal Educ Rev, 2008, pp 31–54

On election in 2008, the new Australian Government put a stop to the Research Quality Framework (RQF), which was gearing up for its first assessment round. Nonetheless, the Government has since announced that its place will be taken by a new research quality and evaluation system, the Excellence in Research for Australia initiative (ERA). ERA will depart from the RQF in certain respects and some useful comparisons can be made. However, the essential issue remains the nature of assessment, and in particular the combination of publication metrics and peer review.

If the effect of the RQF or the ERA were to raise research quality across the board, it would be worth the considerable effort and law academics could embrace it warmly. The concern, instead, would be that the system had narrowing or channelling effects that actually undermined research quality: that it worked against variety, innovation or collegiality in research. There would also be a fairness check: a query whether the system really did make judgements on the basis of merit. Overall, this article is not hostile to the idea of a research assessment exercise, but it takes the view that there are some critical choices to be made, particularly about the nature of the measures to be employed to strike the ratings.

Once, the work of academics in the humanities and social sciences was funded almost entirely from salary. That salary included a component for time spent on research (roughly 30 per cent), together with regular sabbaticals and any extra time individuals put into their own research (‘after hours’). Those staff members who had tenure were left largely alone to choose their own projects. Research was still requisite, certainly to obtain promotion, and individuals took their own initiative to devise projects that would contribute to knowledge and woo publishers.

Starting under a Labor Government, at the direction of the Education Minister, John Dawkins, and stepping up considerably in the 1990s, universities have been drawn into the public sector management revolution. Combining neo-liberal economic principles with the new management sciences (especially accounting and human resource management), this system pursues goals, applies incentives, makes measurements, encourages competition and requires accountability. Productivity, excellence and relevance are among its key values. While positive in many respects, the regime does seem to withdraw the trust and autonomy that characterised the funding of the academic profession in earlier days.

Academics must juggle roles. They are salaried knowledge workers toiling in a changing production system and labour market for research. They are also energetic careerists and entrepreneurs in an expanding field of intellectual and social capital. Given such a mix of incentives and ambitions, it is likely a selective measurement system – one that favours a distinctive kind of quantity or quality, for instance – will have an influence on research.

The recent stress on quantity has had its own effects. I would put aside the very real problem of encouraging and assisting those who do not write for publication at all, and focus here on those who are productive. Principally, the stress affects the rate of publication. However, style and format are influenced too, as publications in law begin to see, for example, work split into smaller parts, articles with multiple authors rather than sole-authored or co-authored pieces, and even different versions of the same work.

The shift to quality seems a welcome relief from this feverish pace. Yet it, too, is likely to have its preferences. The RQF’s brief was to assess the quality and impact of research. ERA puts it slightly differently, specifying that its interest is in research excellence. The main difference will be in its greater reliance on metrics over peer review as the method of assessing these attributes. ERA metrics are likely also to favour certain kinds of research over others.

Are the preferences identified at the level of definitions? The RQF Submission Specifications gave a definition of research quality: it was defined as referring to the quality of original research. In contrast, the Australian Research Council (ARC) consultation paper does not offer a definition of research quality. Instead, consistent with the greater emphasis on metrics, it identifies indicators of research quality to include ‘publications analysis (ranked outlets, citation analysis and percentile analysis where relevant) and research income awarded on the basis of peer review’. It adds a category of research activity and intensity, where the measures are to include research income, higher degree by research (HDR) student load and completions, and staff full-time equivalence (FTE) data. It suggests that HDR loads and completions can be a measure of research quality too. However, even though metrics are to play a bigger part in ERA, the use of indicators is to be combined with expert review by research advisory committees. Overall, the research has to be benchmarked as international, national, emerging or not competitive. Finally, there is a judgement to be made about the weight to place on the various metrics.

If much of the exercise is about defining quality, such assessments can present a more fundamental challenge for the law discipline. Because of the definition they give to ‘research’, some scholarly activities and outputs will be excluded from consideration altogether.

It is worthwhile making some observations about the diversity of legal research in order to suggest which might be favoured by an assessment exercise. The Council of Australian Law Deans (CALD) has been at pains to point out that legal research embraces a variety of approaches. Their 2005 statement lists 10 approaches: doctrinal, theoretical, critical/reformist, fundamental/contextual, empirical, historical, comparative, institutional, process-oriented and interdisciplinary.

If legal research is to be useful in instrumental terms, the law needs to be tested empirically through the investigations of the social sciences. Often this means taking a greater interest in legal practices and legal impacts.

Qualitative research is valuable. Legal research has been moving in this direction. The competition for ARC grants has been one driver, most directly because this kind of research readily justifies the expenditure of money. In terms of its appeal, it looks most likely to guarantee the sponsor a concrete finding or a useful recommendation.

Once, the concern was not enough instrumentalism: too much talk about law was rhetorical. Now, could there be a danger of too much instrumentalism? Assessors should realise that legal researchers are almost always going to give attention to the peculiarities of law, to think that the discourses and processes within law matter, even when they are also looking at law as an economic or social phenomenon. Sponsors want to see if and how law can produce an economic benefit or solve a social problem, or even just act effectively as an instrument of regulatory policy. Yet the role of law might be to provide a forum or a medium in which the nature of those benefits or problems is contested or a policy is resisted.

A crucial consideration is the kind of legal research that finds favour with the top ranked journals and publishers.

Peer review is sometimes accused of subjectivity. The Australian Research Council chair Professor Margaret Sheil has indicated that steps will be taken to prevent small groups of reviewers becoming too influential when the systems for awarding research grants and judging research quality converge within the Council.

Peer review also has a workload challenge.

Potentially, the workload for ERA is even higher, for it aims to evaluate all the research publications of all the academic staff in each discipline cluster during the reference period.

The ARC consultation paper says that peer review of a sample of outputs may be required ‘where there are no appropriate indicators for the discipline or information about the outputs is not captured by the indicators being used for the discipline (eg. the majority of research outputs are not indexed by the citation data supplier)’.

High-volume decision-making systems are drawn to measures that appear objective. Often these measures are quantitative. Would the RQF (now the ERA) metrics be mere proxies for quality? The focus here is on the ranking of research publication outlets. Ranking involves: (a) deciding which kinds of outlets will be counted at all; (b) establishing a hierarchy of outlets (journals versus books and so on); and (c) ordering a particular type of outlet in tiers (eg, top-tier journals through to the also-rans).

The focus so far has been on ranking journals. The drive within the Department of Innovation, Industry, Science and Research (DIISR) is for a single journal ranking system, in the shape of a pyramid, with only a very small percentage of journals in the top tier.

Law academics can agree that there are well established journals: journals associated with prestigious law schools, both local and overseas. The prestige of local schools would likely place such reviews as the Melbourne University Law Review, Federal Law Review, Sydney Law Review and Monash University Law Review (all very good journals) in the small top tier.

Should the ranking system go behind the title to look at quality control and who edits and publishes in the journals? One complication for the law discipline has been the practice of student editors, at least for law school reviews in the United States and Australia. This peculiarity has been smoothed out somewhat, with faculty advisers taking a more active role and articles routinely being sent to external referees.

Who, then, would rank the journals independently if rankings were to be based on the prestige of the title? It would be a difficult task for a group of law deans acting collectively and, naturally enough, each university might be inclined to favour its own review.

Furthermore, many of the different fields of law boast their own subject-specific journals now. A single ranking system would not only need to order the law school reviews, it would have to rank the generalist law school reviews against such subject-specific law journals as well. It is important to note that the ERA indicator will not just highlight a few ‘top-tier’ publications. It will rank most other publications as low or despatch them to an also-ran category of not ranked.

Due in part to DIISR-type requirements, most journals are now refereed, or they will not receive submissions from academics.

The general point is the lack of an obvious hierarchy in law. If the focus is on who publishes where, the finding is that law academics spread their work around.

If metrics have their uses, submission and rejection rates would seem to offer more objective data than perceptions of prestige.

What would be a way around these issues? If academics accept the idea that they are all swimming in the same pool, the short answer might be a simple accord. Collectively, the system strikes a compact on a hierarchy of journals: almost any hierarchy would do. Thus, by fiat, certain journals would become the top journals and law academics could then compete to be published in them. Yet surely these rules of the game would need to be prospective: it would not seem right, if the stakes were high, to apply them as the measure retrospectively.

Another dilemma is provoked by the lesser importance that the law discipline has attached to journal publication compared to other disciplines. Law academics publish books. Frequently, they are cases and materials or textbooks, but law academics also publish research monographs and chapters in edited collections of papers. The concern is that book publishing will increasingly defer to the publication of journal articles.

A research assessment system discriminates against certain types of books. Publishers are apprehensive that RQF- and ERA-type systems will discourage textbook writing.

The risks are not confined to texts. Finding a publisher for a research monograph is not easy, given the economics of the publishing industry.

Book publishing also presents a ranking challenge. Would those rankings be based on prestige, the refereeing process or the difficulty of obtaining a contract? Like journals, the outlets are diverse. This discussion has concentrated on publication rankings, but metrics include other measures, such as citation counts. Citation counts can cut through the publication rankings issue by going straight to citations of individual papers, though citation rates can also be used as a means to rank the journals.

The ARC consultation paper says that ERA will consider a number of indicators of research quality, with a particular focus on research publications and bibliometrics, including ‘profile of citations against relevant Australian and worldwide benchmarks where relevant and available’ and ‘centile analysis of publications against most highly cited world papers where relevant’.

Citation rates are also affected by the accessibility of the original work, which takes us to the issue of the range of library holdings and even the bibliographic databases. DIISR has commissioned the very proficient AustLII team to work on a system of collection for law. The citations contained in books are especially hard to collect. In the law discipline, academics read books, but books are not so easy to find in electronic databases, and harder still to read online.

The ARC consultation paper signals that research income will be among the measures of research activity and the indicators of research quality for ERA. Research income will be collected according to the Higher Education Research Data Collection (HERDC) categories 1–5. In law, though, the spread of these grants has been quite narrow, so this metric should largely act to identify some very good researchers, predominantly in the Group of Eight (Go8) universities. If the ARC favours research promising a discrete, practical result, it is likely to push legal research towards the instrumental end of the spectrum (see above).

ERA makes HDR student loads and completions a measure of research activity and intensity. At least their measurement will acknowledge the research intensity within certain law schools where PhD enrolments are strong.

Such systems have consequences for the distribution of rewards and advantages throughout the sector.

The clearest consequence for the individual academics is the ratings they receive. If the assessment is important to the institution, then those individuals who rate highly will be rewarded, subject to the flexibility that institutions have to vary, for instance, teaching and administrative loads, remuneration, or promotion criteria. Such reward would seem to be a goal of the system.

The ARC consultation paper now says that ERA will not determine the allocation of research block grants. In any case, law schools earn most of their income from teaching. Arguably, the highest stake is reputation. Reputation is an intangible, though it can have a major impact on postgraduate student enrolments, fields for appointments to staff, and success with competitive grants and commissions for research.

Recruiting high performers is an obvious response. Moreover, such competition can have cumulative effects. For example, senior researchers and good infrastructure attract grants and higher-degree research students. Grants fund research fellows who work full-time on research and writing for publication. Sometimes the fellowships release senior staff from teaching and administrative duties. Increasingly, together with studentships, they bring in excellent new staff who complete at least the initial research and writing for co-authored publications. However, all law schools will be reformed somewhat if research methods move closer to the science model.

Behind this trajectory is the thinking that research benefits from groupings within the one institution (critical mass, cross-fertilisation, economies of scale and scope) and that a small country can really only be expected to support a small number of excellent research schools. In this way, ratings levels (five-star, four-star and below) or benchmarking distinctions (international, national, emerging and non-competitive) would be another factor at work in the shake-out of Australian law schools – a shake-out that has been looming since the numbers of schools swelled so much.

Other factors are at work too, such as the freedom to charge full fees if the market will bear it. At the end of this road is the prospect of an accreditation and stratification system based on resource and performance measures, like the American Bar Association system in the United States. While prestige varies, formally speaking all law schools in Australia are equals. Academics should expect basically the same working conditions whichever school they are appointed to. That equality has frayed around the edges in terms of, for example, teaching loads and support services, but each school can claim to do the same thing, and the Council of Australian Law Deans has treated issues like the RQF and ERA gingerly, precisely to avoid fuelling divisions in the ranks.

The ARC consultation paper also says that it is important that ERA encourages collaboration across institutions. Cross-institutional outputs can be submitted by each institution involved provided they are appropriately identified.

A division may emerge between the schools that educate graduates for elite national and international commercial work and those that educate for the local small business, household, criminal and public-interest practices. However, even such strategies are problematic.

If the thinking behind research concentration is basically sound, policy analysis should still balance the consequences for the rank-and-file schools if a select few become research elites. Those other schools in the ruck would still be expected to meet the basic requirements of a respectable law school. The current professional requirements (the eleven areas of knowledge all Australian States accept as compulsory for admission to practise – the Priestley 11) mean that all schools must teach credibly across a spectrum of private and public law subjects.

Would not the task of the rank-and-file schools be made even harder if they face an operational cycle of staff turnover – for instance, seeing young stars develop their profile and then leave for an elite institution? If it is accepted that teaching should be research based, these schools might have trouble keeping honours and postgraduate students.

More to the point, perhaps, is the treatment of the students in the rank-and-file schools. It might be more important to the quality of legal services if these students have exposure to teachers who are also scholars and researchers than that the standards of the elites increase even more. How can Australia get universities into the international rankings while ensuring the base is sound too? Of course, these dilemmas only lead into larger questions about education policy: how much to spend on law compared to other disciplines, on university graduates compared to technical training or basic literacy.

It is certainly hard to argue with quality and one should not overstate the impacts that research assessment might have. In many areas of the law, research is strong in Australia.

The concern would be if any system undermined the diversity and wisdom the discipline currently displays. While a research quality assessment system would stimulate some worthwhile research (that otherwise might not have appeared), the uncertainty is whether it will discourage the kind of long-term investment and collaboration that builds real scholarship in the discipline. It is likely that the safeguards lie in the design of the system rather than in resisting it altogether.

