
Federal Law Review


Edmond, Gary --- "Judging Surveys: Experts, Empirical Evidence and Law Reform" [2005] FedLawRw 4; (2005) 33(1) Federal Law Review 95

JUDGING SURVEYS: EXPERTS, EMPIRICAL EVIDENCE AND LAW REFORM

Gary Edmond[*]

1 INTRODUCTION

This article examines the conduct of empirical legal research and its relationship to law reform. Through a detailed analysis of the largest survey of State and federal judges conducted in Australia it explores some of the limits to empirical investigation, particularly the tendency to rely primarily on judicial perspectives as the basis for law reform. Focusing upon empirical legal research on the subject of expert evidence, the article initially examines research methodologies, then extends the analysis to consider the correspondence between the collection, interpretation and presentation of empirical data and recommendations for legal change. This involves an assessment of a broad range of methodological and theoretical issues with implications extending well beyond the particular survey. Finally, the empirical research on expert evidence will be evaluated using the principal reform proposal suggested by the investigators. This exercise will provide an indication of the methodological problems which beset the survey and demonstrate practical limitations with the particular approach to expertise.[1]

Australian Judicial Perspectives on Expert Evidence: An Empirical Study and Australian Magistrates' Perspectives on Expert Evidence: A Comparative Study report the results of empirical studies of judicial and magistrates' attitudes to experts and expert evidence in Australian legal settings.[2] In the ensuing analysis the principal findings from Australian Judicial Perspectives on Expert Evidence (hereafter 'Perspectives' or the 'Report') will be reviewed. In exposing problematic assumptions and questionable methodological practices associated with the research project this article aims to re-assess several of the principal, and widely endorsed, findings. Specifically, it questions the empirical basis for the following claims:

1. That bias is the most serious problem with expert evidence.

2. That problems with expert evidence warrant the reform of existing rules and procedures.

3. That judicial officers (and the public) are in favour of immediate and/or fundamental law reform.

4. That the proposed Declaration (or similar reforms) will relieve problems attributed to expert bias or partisanship.

2 PERSPECTIVES: AN OVERVIEW

This section provides a succinct overview of the survey, its origins, findings and recommendations.

A Background

The survey of Australian judges emerged from concern with the limited amount of information about the views of judges on the role played by expertise in Australian legal systems.[3] Conducted under the auspices of the Australian Institute of Judicial Administration (AIJA), the study was intended to remedy this deficiency.

B Methodology

Perspectives focuses on the opinions of Australian judges (and magistrates):

Judges are in a unique position to contribute an informed perspective on the way in which expert witnesses function within the adversary system. However, until now comparatively little has been known about the views of judges in Australia or internationally in relation to expert evidence and expert witnesses. [4]

All other perspectives are marginalised.[5]

Prior to distribution the investigators developed a prototype survey and conducted a pilot study. A revised instrument was distributed by mail to all Australian judges, with the AIJA formally endorsing the project through the provision of a covering letter. The authors of the Report are a barrister (Freckelton), an academic psychologist (Reddy) and a legal academic (Selby). This combination, in conjunction with the dearth of extant materials, facilitates the presentation of the results as a timely and comprehensive interdisciplinary study.[6]

The Report presents the results of the survey of Australian judges. Slightly more than 50 per cent of Australian judges (51%, n=478) responded.[7] The investigators expressed a primary interest in judges with experience as trial judges. This, they suggest, raises the effective response rate closer to 60 per cent. The questionnaire adopts the form of a multiple-choice survey with some space allocated for written comments. Several questions requested elaboration or greater specificity.[8] The survey was anonymous and the quantified data are presented as the empirical basis for the findings in the Report.

C Survey findings

According to the Report, the 'three key outcomes' are: 'a recognition by judges of the value of clarity of explanation, a quest for reliability in both opinions and their proponents, and a desire that the courtroom be a truly effective accountability forum'.[9] These results are consistent with the authors' espoused desire to improve the standard of expert evidence appearing in courts. Having affirmed the (increasing) importance of expertise in modern litigation, the Report identifies a range of serious problems disrupting contemporary practice. Chief among these problems is 'bias': 'witness partisanship [is] the issue that most troubles Australian judges in relation to expert evidence'.[10] Complexity, comprehension, communication, advocacy and the lay jury also, to varying degrees, warrant attention.

The prominence of problems with expert evidence, particularly problems attributed to bias, leads the authors to recommend immediate law reform.

D Reform Implications

On the basis of the survey results Perspectives advances law reform. Changing the 'culture of partiality' among experts and eliminating bias are at the forefront of the recommendations.

Having empirically established the existence of a range of serious problems, the Report is perhaps disappointing in the number, variety and originality of its recommendations. Confronted with a partisan expert culture, serious communication and comprehension difficulties, routine exposure to complex evidence and a lack of preparation and skill among advocates, the actual proposals might be considered quite modest in their compass. They include: requiring experts to make a formal declaration, substantially similar to recent reforms adapted by several Australian jurisdictions from the English modifications to civil procedure; some further (informal) mechanisms for training lawyers and experts; and greater recourse to audiovisual technologies. The expert Declaration, however, is the only reform proposal elaborated in any detail. (The Declaration is reproduced in Section 6.A.i and discussed in Section 7, below.)

The remainder of this article is dedicated to analysis. The design and interpretation of the survey, the existence and extent of problems, assumptions about the nature of expertise as well as the need and value of the proposed reforms for policy and practice will all be examined in detail.

3 BIAS: (DE)CONSTRUCTING A PUBLIC PROBLEM

Presented as an impartial empirical account of judicial attitudes, Perspectives purports to have identified the major contemporary problems with expert evidence in Australia. The Report leaves the reader in little doubt that expert bias (or partisanship) is the most significant problem.

This section examines the construction of 'bias' as an empirical legal problem. This involves an assessment of whether the survey results support the contention that bias is the most serious problem with expert evidence and whether bias should even be characterised as a serious legal problem.[11]

A Ascertaining the 'most serious problem'

Unremarkably, the vast majority of surveyed judges appear to have considered expert evidence useful for fact-finding. Most respondents (Question 2.1) indicated that experts were useful 'often' (69%) or 'always' (13%).[12] Having established the (potential) utility of expert evidence, the survey immediately turns to inquire about a range of problems conventionally associated with experts and the provision of expert evidence. These 'problems' include: bias; use of language that is difficult to understand; failure to stay within the parameters of expertise; non-responsiveness to questions; failure to prove the bases of opinions; cross-examination not making the expert accountable; and the complexity of evidence (Questions 2.2 and 2.4).[13] Unfortunately, the design of the survey precludes a straightforward or methodologically persuasive evaluation of the relative significance of these purportedly serious problems.

Of all the problems, however, 'bias' is consistently presented as the most serious and detrimental.[14] For example: 'The answers … confirm witness partisanship as the issue that most troubles Australian judges in relation to expert evidence'.[15] And,

[m]any judges who responded to the survey identified partisanship or bias on the part of expert witnesses as an issue about which they were concerned and in respect of which they thought that there needed to be change. They did so directly in their answers to questions and also in their comments about experts' lack of objectivity. Many raised the issue more than once in their responses. The picture painted by a significant cross-section of respondents was one of worry about an unacceptable culture in sectors of disciplines providing report-writers and witnesses to the courts. The culture, they asserted, does not adequately value and put into practice independence, objectivity or transparency of opinions.[16]

These extracts, and others like them, provide a sense of the recurring significance accorded to bias, partisanship and a lack of objectivity among experts in the Report. Yet, notwithstanding these claims, the survey reveals little about the relative seriousness of bias or its relations with the other problems attributed to the use of expertise.

One of the major difficulties with the authors' interpretation of the survey stems from the inability to meaningfully compare or assess data elicited from different questions. This is a consequence of survey design. Because many of the issues are treated discretely it is difficult to ascertain their relative significance, their interdependence or whether, when weighed against other features of the adversarial system, they should be regarded as (serious) problems. Consequently, notwithstanding the characterisation of bias or partisanship as the most serious problem, several problems might be understood to compete for the status of the most serious problem associated with the admission, presentation or assessment of expert evidence.

The disparate treatment of problems is reflected in the following array of questions:

Q2.2 Have you encountered any of the following problems with expert evidence?
Q2.3 What is the single most serious problem you have encountered with expert evidence?
Q2.4 Have you encountered evidence from experts which you were not able to evaluate adequately because of its complexity?
Q2.11 From the following list please circle the one which you consider to be the single most persuasive factor when an expert is giving oral evidence.
Q3.1 How effectively do most advocates appearing before you elicit oral evidence-in-chief from expert witnesses? (and Question 3.3)
Q3.7 Have you experienced difficulty in evaluating the opinions expressed by one expert as against those expressed by another? (and Question 3.9)
Q5.7 Should matters involving complex and conflicting medical evidence be withdrawn from juries and be determined by judges alone or by some other means? (and Questions 5.5 and 5.6)
Q6.4 Is the courtroom a forum in which the reliability of expert theories and techniques is adequately evaluated?
Q6.7 Do the same expert witnesses appear regularly before you for the same side in litigation?
Q6.8 Have you encountered partisanship in expert witnesses called to give evidence before you?
Q6.9 If you answered (a) to the previous question, is this a significant problem for the quality of fact-finding in your court?

Accepting that there is some overlap in their treatment, especially in the answers included with Questions 2.2 and 2.3, many of the questions treat the issues — prefigured as problems — in isolation.[17]

Because of the way in which these (and other) questions have been posed they are practically incommensurable. We have few means of comparing their relative seriousness. There is no discussion of the relationship between categories or how conflicts between experts, or diversity of opinion or complexity might influence perceptions of bias. Is, for example, bias a more serious problem than complexity? What, if any, are the relations between bias, complexity and the clarity of presentation (Questions 2.2, 2.3, 2.4, 2.11 and 6.8)? More fundamentally, how are judges who experience comprehension or communication difficulties able to ascertain bias?

Contrasting claims about bias from Question 2.3 (reproduced in 3.B.i, below) with other questions and findings in the Report introduces a series of additional analytical limitations. Two examples will serve as illustrations. First, even though the judges seem to have selected 'bias' as 'the most serious problem', bias appears to have been considered less significant in evaluating the persuasiveness of oral expert evidence than the 'clarity of explanation'.[18] When respondents were asked (Question 2.2(ii)) if they had ever encountered experts using oral or written language that was 'difficult to understand', the vast majority indicated that they had, either 'occasionally' (73%) or 'often' (14%).[19] When it came to providing an indication of the factor which was most persuasive when an expert presents oral evidence, half (50%) of the judges chose 'clarity of explanation'. Less than one quarter (23%) of the judges, responding to the same question, selected 'impartiality' (Question 2.11).[20] So, while bias may be perceived as a kind of problem, to some extent it may be anticipated and its effects managed — perhaps informally by judges.[21] It is also possible that some of the issues presented as problems are, notwithstanding the nomenclature (imposed by the survey instrument), not particularly significant to judges in their practice.[22]

Such an interpretation might help to explain the identification of 'bias' as the 'most serious single problem' in Question 2.3 when contrasted with the results of Questions 6.8 and 6.9. In these questions respondents were asked if they had 'encountered partisanship in expert witnesses called to give evidence'. The vast majority of respondents (85%) indicated that they had. Interestingly, in a leading follow-up question, only two fifths (40%) listed such partisanship as a 'significant problem' (Questions 6.8 and 6.9).[23] Just to restate this result, a majority of judges did not consider partisanship to be a significant problem for fact-finding.[24]

In the face of these apparent inconsistencies the authors of Perspectives have generally preferred a compartmentalised interpretation of their data. The Report emphasises particular questions and results in isolation. It makes few attempts to address ambiguities or apparent contradictions. The absence of a systematic attempt to integrate the findings through the provision of more standardised answers or terminology leaves us with a motley assortment of potentially ambiguous, and at times contradictory, results. The results are not unequivocal in their support for the existence of serious and widespread problems with expert evidence. The seriousness of bias and partisanship seems to vary across the survey. The results and analysis of Questions 2.2 and 2.3 are not easily reconciled with the data from Questions 2.11, 6.8 and 6.9. The failure to distinguish between different types of bias (advertent or inadvertent), or to clearly define bias, partisanship and objectivity, only serves to increase the ambiguity, impeding the ability to determine the existence or relative seriousness of any particular problem.[25]

There are, however, other difficulties with claims about bias.

B Is 'bias' a serious problem?

This subsection examines some of the assumptions underpinning the empirical identification of bias as a problem. In particular, whether the routine exposure to 'biased' expertise presents a problem, let alone a serious problem, and if the survey results necessarily lead to the types of conclusions drawn by the authors.

i Categorising 'bias'

Deceptively simple, Question 2.3 inquired:

Q2.3 What is the single most serious problem you have encountered with expert evidence?[26]



                                                                 Frequency   Per cent
(a) bias on the part of the expert                                      85      34.84
(b) use of oral or written language by the expert that was
    difficult to understand                                             24       9.84
(c) failure by the expert to stay within the parameters of
    his or her expertise                                                14       5.74
(d) non-responsiveness by the expert to questions                       12       4.92
(e) failure to prove the bases of the expert's opinions                 34      13.93
(f) failure by the advocate to pose examination-in-chief
    questions appropriately                                             34      13.93
(g) failure by the advocate to cross-examine so as to make
    the expert accountable                                              26      10.66
(h) other                                                               15       6.15
Total                                                                  244     100.00

Slightly more than one third (35%) of respondents identified 'bias on the part of the expert' as the 'most serious problem' with expert evidence. The investigators use these results as evidence for (their concern with) the prevalence and impact of bias — the issue 'most troubling' to Australian judges. But do responses to the survey support this assertion?

What is presented as, and initially appears to be, a sizeable proportion of judges alarmed by the incidence of bias can also be understood as an artifact of classification (or survey design).[27] This can be illustrated quite simply. Several of the answers included within Question 2.3 are similar or closely related. Instead of the answers supplied to the judges, the authors might have offered alternatives. Depending on how they were configured and interpreted, alternative answers might have diminished the relative seriousness of bias. For example, rather than treat a series of quite narrow categories discretely, the authors might have combined these categories under a more generic description. So, among the possible answers presented in the survey, the authors might have combined '(e) failure to prove the bases of the expert's opinions', '(f) failure by the advocate to pose examination-in-chief questions appropriately', '(g) failure by the advocate to cross-examine so as to make the expert accountable', and perhaps even '(c) failure by the expert to stay within the parameters of expertise' under a more general rubric such as 'failure of advocacy'.[28] If we were to subsume these sub-categories within the category 'failure of advocacy' then this new single category would—on the basis of the responses received—represent the most serious problem with expert evidence (39%, and 44% if we incorporated (c)). The results of this alternative Question 2.3 might be presented as follows:

Hypothetical Q2.3A
What is the single most serious problem you have encountered with expert evidence?

                                                          per cent
(a) bias on the part of the expert                           34.84
(y) failure of advocacy (e + f + g)                          38.52 (ie, 13.93 + 13.93 + 10.66)
(z) failure of expert communication (b + c + d)              20.50 (ie, 9.84 + 5.74 + 4.92)

Consequently, the same data rearranged according to an alternative classificatory scheme might be used to suggest that 'failure of advocacy' is the most serious problem with expert evidence. Further, when added to perceived problems with communication, these issues (y + z, from Hypothetical Q2.3A) now account for about three fifths (59%) of the most serious problems with expert evidence.
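The reclassification in Hypothetical Q2.3A is simple arithmetic, and it can be made concrete with a short sketch. The letters and percentages below are taken from Question 2.3 of the Report; the generic headings 'failure of advocacy' and 'failure of expert communication' are this article's hypothetical constructs, not categories in the survey instrument.

```python
# Per-answer percentages from Question 2.3 of the Report.
q2_3 = {
    "a": 34.84,  # bias on the part of the expert
    "b": 9.84,   # language difficult to understand
    "c": 5.74,   # failure to stay within parameters of expertise
    "d": 4.92,   # non-responsiveness to questions
    "e": 13.93,  # failure to prove the bases of opinions
    "f": 13.93,  # failure to pose examination-in-chief questions appropriately
    "g": 10.66,  # failure to cross-examine so as to make the expert accountable
    "h": 6.15,   # other
}

def regroup(data, groups):
    """Sum the per-answer percentages under new generic headings."""
    return {name: round(sum(data[k] for k in keys), 2)
            for name, keys in groups.items()}

# The groupings are hypothetical: they mirror Hypothetical Q2.3A, not the survey.
grouped = regroup(q2_3, {
    "bias": "a",
    "failure of advocacy": "efg",
    "failure of expert communication": "bcd",
})

for name, pct in grouped.items():
    print(f"{name}: {pct}%")
# Under this classification 'failure of advocacy' (38.52%) displaces
# 'bias' (34.84%) as the single most serious problem.
```

The point of the sketch is that the ranking of 'most serious problem' is a function of the classificatory scheme chosen by the survey designers, not of the underlying responses.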

Continuing our assessment: unlike the other answers in Question 2.3, 'bias' stands as a unified category. Yet, there appear to be few compelling reasons to treat 'bias' indivisibly. Restricting ourselves to categories recognised in the Report, it would be possible to divide 'bias' into: deliberate bias or partisanship; inadvertent bias; bias that was difficult to ascertain; and perhaps even 'expert lacking credibility'.[29] Cognisant of these subdivisions, had 'intentional bias' and 'inadvertent bias' been incorporated into the survey we might have found that deliberate bias generated an even smaller volume of respondent disquiet. This might be represented as follows:

Hypothetical Q2.3B
What is the single most serious problem you have encountered with expert evidence?

                                                          per cent
(a.i)  deliberate bias on the part of the expert             a.i ≤ 34.84
(a.ii) inadvertent bias on the part of the expert            34.84 − a.i
(y)    failure of advocacy (e + f + g)                       38.52
(z)    failure of expert communication (b + c + d)           20.50

Significantly, the division of bias along an advertent/inadvertent axis might have raised serious reform implications. The inclusion of such a division (particularly recognition of (a.ii)) may have transformed some of the difficulties attributed to bias into an incorrigible feature of expertise, rendering the need for, or possibility of, reform uncertain.

These are not the only difficulties with Question 2.3. We are left to wonder about categories which were not included among the contenders for the single most serious problem. Why, for example, were judicial comprehension, the availability of experts, access to courts, impressions of jury competence, issues of delay and cost, impacts of recent reforms to tort law, or the effects of case management omitted?[30] Can we be confident that, had these options been included, bias would have retained its (empirically-mandated) status as the most serious contemporary problem with expert evidence?

ii 'Bias' spotting

Perhaps the most revealing dimension in the treatment of bias is the fact that judges were actually asked about it. The authors appear to believe that bias is a stable, tangible, observable quality and that their questions will produce consensus around its meaning and distribution.[31] We can be confident that these assumptions are intended because the Report does not treat the judicial responses ironically. That is, it makes no inquiry into what bias is or how it might be that judges consider themselves capable of ascertaining whether an expert is biased. Moreover, the survey makes no attempt to ascertain whether judicial images of bias are relevant to the practice of (different kinds of) experts or the reliability of evidence. In consequence, judicial responses are taken as an adequate indication of incidents of some entity known as 'bias'.

These theoretical assumptions and their methodological implications become self-defeating — especially in relation to the reform agenda. For if judges are able to identify incidents of bias and we can rely upon their observations, then why should we (or they) regard bias as a serious problem?[32] This leads to something of a paradox. If judges can identify bias then presumably they can deal with it. Alternatively, if they are unable to identify bias, or experience difficulty ascertaining it, then on what grounds can we rely on the judicial responses (to various questions in the survey)?

iii Judicial recourse to the rules of evidence

Another conspicuous difficulty with the emphasis on 'bias' — as a serious problem requiring intervention — is the quite limited judicial recourse to the mechanisms of evidence law.[33] Unfortunately for the authors, these data would seem to destabilise claims about widespread practical difficulties posed by biased experts and/or discount the possibility of improvement through idealistic evidentiary and procedural reforms.

From the assembled data, judges would appear to have applied rules intended to guide, restrict and manage the admission and use of expert opinion evidence infrequently (Questions 7.2 and 7.3).[34] The survey results suggest that less than one third of the judges have excluded expert evidence more than five times on the basis of any rule recognised by the common law or codified in the Uniform Evidence Acts.[35] Consequently, we are left to speculate about whether, in their practice — as opposed to their responses to a multiple-choice questionnaire — judges conceive of bias as a serious problem. No attempt is made to reconcile responses which suggest infrequent recourse to the relevant rules of evidence with the analysis of Questions 2.2 and 2.3.

At this juncture it is helpful to reproduce Figures 15 and 17 from the Report. Figure 15 provides an indication of the limited recourse to evidence law, in the face of (the apparently) increasing use of expert evidence.[36]


Figure 15: Judges' Patterns of Exclusion of Expert Evidence (Q7.2) (AJP 86)

Figure 17 provides some indication of judicial reaction to questions about the abolition of common law exclusionary rules.


Figure 17: Judges' Views on Abolition of Common Law Exclusionary Rules of Evidence (AJP 88)

Notwithstanding limited use of the rules (in this context, at least), the judges are overwhelmingly opposed to law reform. Judicial antagonism to reform appears to extend even to those rules — like the ultimate issue rule — which, as the authors note, have received trenchant criticism for generations.[37] The data pertaining to the use of the Uniform Evidence Acts are analogous. Figure 17 can, and probably should, be interpreted in a way that is not consistent with widespread judicial commitment to incremental law reform, let alone fundamental changes to the adversarial system. The authors leave us to wonder why, if expert evidence apparently raises such serious problems, judges seem to be opposed to adjectival reform and do not seem to utilise the available rules to discipline biased expertise.

iv Judicial appointment of experts

Just as the rules pertaining to expert evidence seem to be infrequently invoked, few Australian judges appoint experts (Question 9.2).[38] This trend is consistent with practices in other adversarial jurisdictions.[39] If biased expert evidence raises serious problems for judges, and non-partisan experts were readily ascertainable, we might have anticipated a more active exclusionary regime and greater use of court-appointed experts, assessors and referees.

It is possible that in addition to their commitment to the adversarial tradition, common law judges actually recognise some of the potential difficulties inherent in the identification of impartial experts and the need to assiduously manage such appearances through the course of a trial and appeal(s).

v How serious is the problem?

One of the possible interpretations of the data, especially the answers from Question 2.3 when considered in conjunction with other parts of the Report, is that even though 'bias' is consistently presented as the single most serious problem with expert evidence it may not be sufficiently serious to warrant concern or intervention. From this perspective 'bias' might be the proverbial molehill. It does not follow that 'the single most serious problem' in a finite series of problems is in actuality a threat to legal order. Perspectives transforms the most serious among a limited range of 'problems' into a serious legal problem.

vi Professional rhetoric and self-identity

Finally, in their judicially-predicated investigation of problems with expert evidence the authors do not reflect upon whether the judicial responses might be conditioned by professional ideologies and institutional commitments.[40] This oversight may be a consequence of the exclusive focus upon the judiciary.

The survey gives little depth or sociological insight into reasons why social groups might classify something as a problem or why different groups might classify different things as problematic.[41] Consequently, even if claims about the culture of partiality and the existence of partisan experts were not empirically justified, judicial recourse to the partisan expert (or 'junk science') might nevertheless be explicable. The Report makes no attempt to consider any advantages conferred through the maintenance of concerns about expert performance, especially the prevalence of bias, in contexts where judges regularly rely on expert evidence to rationalise their decision making. The fact that judges are routinely expected to make decisions, often involving the assessment of complex evidence, might orient their perceptions and performances in ways that tend to emphasise specific types of problems when publicly accounting for their practices.

In some circumstances, especially where judges have to decide and rationalise choices between competing expert opinions, they are in a position to benefit from images of complexity, expert disagreement and bias as well as more conventional images of mainstream science and methodological propriety.[42] In reversing decisions, especially when accounting for miscarriages of justice, the transfer of blame onto biased experts provides a particularly valuable means of maintaining legal institutional legitimacy. By shifting agency or responsibility for their decisions judges can attribute some responsibility to 'incompetent' or 'biased' experts rather than legal rules and institutions or the capabilities of fact-finders (whether judge or jury).[43] These representations need not be considered disingenuous. Complexity, communication difficulties and perceptions of bias would seem to form part of the judge's lived experience. In phenomenological terms, they form part of their lifeworld.[44]

Recognising these possibilities may help to explain the apparent judicial willingness to confirm the existence of bias. It might also shed light on why the majority of judges, apparently wedded to the existence of a range of serious problems, are not enthusiastic proponents of reform.[45]

4 ADDITIONAL METHODOLOGICAL PROBLEMS

Moving from the focus on bias, this section explores a range of more broadly based methodological problems with the survey design and presentation of results.

A Reification: privileging judicial perspectives

Judges are in a unique position to contribute an informed perspective on the way in which expert witnesses function within the adversary system.[46]

Perhaps the most fundamental weakness, of a general methodological order, relates to the privileging of judicial perspectives along with their conversion into an accurate account of legal practice. While it may be interesting to know what judges think about a range of issues, it is not appropriate, politically or methodologically, to simply substitute judicial opinions for reality.[47]

While the Report recognises the existence, even importance, of 'other legitimate perspectives', they are discounted against the 'particularly valuable experience' of judges.[48]

This is not to argue that other legitimate perspectives do not exist in relation to the role of experts in the courts but to acknowledge that judges have a particularly valuable experience on a day-to-day basis in dealing with the challenges posed by the presentation of expert evidence in their courtrooms.[49]

Apart from the opinions of magistrates, the authors make no attempt to ascertain these other 'legitimate' perspectives.[50]

Two important issues emerge from this orientation. First, judicial expertise is privileged and trusted. Judicial experience is transformed into expertise which demands deference. None of the checks and balances which the authors would impose upon non-judicial forms of expertise are invoked against judges. Typically, and inconsistently with the model(s) of expertise used elsewhere in the Report, there is no reflection about whether judges — as a class or profession — might be 'biased' or whether professional interest in the outcome of empirical description or law reform might be explained by recourse to 'judicial culture' (see 3.B.vi). Second, having identified the existence of alternative voices and perspectives there is no attempt to investigate or contrast them with judicial opinions. The exclusive focus on judges is inoculated.[51] The authors acknowledge a weakness in their methodology and proceed by ignoring it. Having acknowledged the legitimacy of other opinions, it is only the opinions of judges (and magistrates) that will count. Methodologically, this represents an elitist, and perhaps cynical, approach to social inquiry and law reform.[52]

B How (not) to read Perspectives

Page one of Perspectives introduces the reader to the survey and its objectives. Initially, the survey is presented in a methodologically modest guise. As a survey of judicial attitudes it purports to provide 'a first and very important opportunity to understand what it is that the Australian judiciary thinks'.[53] At the bottom of the first page, however, the interest in judicial attitudes is subtly expanded. Judicial attitudes are insinuated as an important resource for law reform.

Since this is the first time that all of the Australian judiciary has been surveyed on any issue, there is added importance in the data elicited. In any future assessment of proposals to make changes to our litigation system, and to the admission of expert evidence in particular, the databank from this survey will be an important reference point in ascertaining judicial views.[54]

The reification of judicial attitudes — along with a willingness to rely exclusively upon them to support law reform — is conspicuous throughout the survey. Qualifications which problematise the (exclusive) interest in judicial opinions are practically ignored.

Two examples illustrate the authors' commitment to methodologically questionable uses of their data. Both are drawn from the 'Summary of Key Findings and Outcomes'. In the first example the authors explain: 'there are some findings which warrant response from litigation reformers. The purpose of this summary is to highlight those findings and to draw attention to important ramifications of the answers provided to the survey'.[55] At this stage we have moved from a survey designed to 'understand what it is that the Australian judiciary thinks' to one where the survey stands for a state of affairs which deserves a response. These tendencies become even more conspicuous and more urgent in the final paragraph of the first Part: 'The judges' responses to the questions posed in the survey require litigation reformers to customise their proposals for change to address the areas identified by the Australian judiciary as actually problematic in practice'.[56] On this occasion, the judicial responses 'require' reforms in areas 'they' have identified as problematic. In just 13 pages the purpose of the survey has expanded from a humble inquiry into judicial attitudes to a reliable dataset which demands an immediate response.

The data and analysis in Perspectives are presented as a reliable account of legal practice. Even apparently compelling data, however, could not redeem the exclusive focus on the judiciary. One profession's perspectives, especially when mediated through a multiple-choice questionnaire, do not provide an adequate description of reality nor an appropriate basis for substantial reform to legal practice.

C Inappropriate questions: why ask?

There are many questions in the survey which are, methodologically, quite peculiar. What, we might wonder, is the value of asking the following questions of judges?[57]

Q3.9 Have you heard cases where you have formed the view that a key expert has been retained by one side just to make the expert less available as a witness for the other side?
Q5.1 In the cases over which you have presided which area of expertise do you think has presented the most difficulty for jurors to comprehend?
Q5.3 Do you think that the jurors have comprehended the expert evidence before your summing up? (and Question 5.4)
Q5.10 From the following list please circle the three factors which you consider to be the most persuasive for jurors when an expert is giving oral evidence?
Q5.11 From the following list please circle the one which you consider to be the single most persuasive for jurors when an expert is giving oral evidence?
Q6.5 Are most experts who give evidence before you representative of the views of their discipline? (and Question 6.6)

These questions, which may possess some value as an indication of judicial conceits (in the sense of beliefs or ideas), provide very limited insight into other dimensions of the legal system. Interest in judicial impressions might have been defensible if the Report had not proceeded to reify judicial attitudes, that is, if judicial perspectives had not been presented as an empirically adequate description of reality and an appropriate basis for law reform. However, both the range of questions and the uses to which the findings are put do not suggest an abstract interest in judicial attitudes.

While judicial opinions about these subjects might be interesting, there are few methodologically defensible grounds for simply privileging or relying upon them. The Report does not treat the responses as the contingent speculation of one particular group of participants in the litigation process. Instead, judicial opinions, even opinions on topics where there is no reason to believe that judges are particularly well informed, are construed as reliable evidence.

These tendencies are perhaps most conspicuous where judicial attitudes are juxtaposed to judicial impressions of jury attitudes.

D Mediating judges and displacing jurors

Judges have a unique perspective of the travails of jurors …[58]

Having just considered the appropriateness and utility of asking questions about subjects removed from the experience or compass of judging, the issue re-emerges explicitly in Part 7 of the Report, 'Juror Problems with Expert Evidence'. Some of the methodological difficulties are apparent in the first sentences of that section:

Judges were asked about their views of the areas that have presented the most difficulty for jurors to comprehend (Question 5.1). Not surprisingly, the most common answer from respondents was that they "did not know" (28.99%, n=49) by reason of the limited feedback which they received from jurors.[59]

Elsewhere, the tone of the Report is less reflective and less qualified. It relies on the fact that judges: 'are therefore in a good position to form an opinion about the aspects of witnesses' evidence that make them particularly persuasive'.[60] On these occasions the responses are used in a way that goes beyond merely gathering information about judicial impressions and opinions. In their analysis of judicial responses to questions about 'jury problems', the authors treat the data as if judicial perspectives accurately reflected actual juror perspectives. The opinions of one group, the judges, are reified and substituted for the opinions of another. Rather than treat the results as indicative of judicial identity and professional differentiation (from imaginary jurors), the results are instead used as if they provide reliable information about the experiences of jurors.[61]

The disregard for the views and experiences of jurors (and other participants) is apparent in the treatment of the data.[62] But the methodological frailties become most palpable through the comparative approach, particularly in the use of numbers and diagrams:


Figure 14: Comparison of Judges' Opinions of Single Most Persuasive Factor when an Expert gives Oral Evidence (Q2.11 & Q5.11) (AJP 72)

Figure 14 reveals almost nothing about the jury. It does, perhaps, provide some limited insight into areas where judges believe they outperform juries in evaluating certain types of evidence.[63] While this was, patently, not the reason for the inclusion of the questions on the jury, the data could be read to suggest that judges believe they are better fact-finders than jurors (Question 5.2).[64] Figure 14 (above), for example, might be interpreted to suggest that judges believe that jurors require clearer explanations, undervalue impartiality and experience and, quite inappropriately, are more inclined to value the expert's appearance. Within the terms of the prefigured questions and implicit model(s) of expertise, judges present themselves as the more rational decision makers (see 3.B.vi).[65] This may constitute the only methodologically defensible use of these results.

The investigators' interest in substantive attitudes prevents assessment of the apparent willingness of judges to speculate about issues upon which they would seem to have limited knowledge. The willingness of respondents to answer questions speculatively introduces an additional level of complexity into the interpretation of the survey. It enables the judicial responses to be treated ironically — as a test and possibly an indictment of judicial reasoning abilities.[66] Widespread preparedness to answer several of the questions, apparently without any reliable foundation, might be considered disconcerting.

E The 'missing' questions?

One of the conspicuous features of the survey is that many questions which might have afforded clearer (or more direct) indications of judicial impressions were not asked. Such questions, examples of which are set out below, might have operated as foils to particular interpretations. Rather than ask direct questions, several of the subjects examined in Perspectives are examined indirectly. This leads the authors to draw methodologically tenuous conclusions inferentially. In making this point it is only fair to acknowledge that there are limits to the number and variety of questions that can be included in any survey.

The earlier discussion of bias provides a useful example of a train of inquiry that was not pursued. Asking a judge which is the most serious in a list of problems or whether they have encountered partisanship is not the same as asking directly: Is bias a serious problem in contemporary litigation? Other questions which might have been incorporated — whether as yes/no, multiple-choice or open-ended — include versions of the following:

Should we reform the rules guiding the admission and assessment of expert evidence?
Do the vast majority of experts perform adequately?
Will the Declaration improve the standard of expertise entering courts?
Should we abolish lay juries?
Do unbiased experts exist?
How do you identify bias?
Why are so few experts subject to judicial censure, or charged with contempt or perjury?
Do inquisitorial systems manage expert evidence better than adversarial systems?
Should we adopt more inquisitorial procedures to manage expert evidence?
Is judicial comprehension of expert evidence a serious problem?
Should judges receive technical training?
Are judicial opinions an appropriate basis for law reform?
Should those who are not judges be consulted in relation to law reform?

While such questions could not have resolved these issues, each might have rendered particular interpretations of the data more difficult (or more compelling). For example, had judges expressed doubts about the value of the expert Declaration then claims about their desire for reform or belief in the value of (a particular) reform might have been weakened. Similarly, if judges were simply asked whether bias was a serious problem and whether it could be remedied procedurally, the answers may have structurally foreclosed several grounds of interpretation and reform. Instead, the authors and readers are left to infer and insinuate. The reluctance to ask more disruptive and — based on the authors' methods — potentially definitive questions suggests a degree of methodological timidity.

F Non-responses: why answer?

One of the most striking features of the dataset is the very large number of non-responses and 'I do not know' answers. Accepting that such responses are difficult to interpret, they are suggestive of respondent discord or inadequacies with the design of the survey instrument.[67] Of the roughly 100 questions (including sub-questions), 17 drew 'no response', 'I do not know', 'no opinion', 'missing responses' or 'it is not possible to say' answers from between 25 and 50 per cent of respondents, and a further 16 drew such answers from between 50 and 100 per cent. Overall, a sizeable proportion of judicial respondents opted not to answer one third of the questions in the survey.

While the authors make few attempts to explain these results, they are occasionally acknowledged:

Not surprisingly, the most common answer from the respondents was that they "did not know" (28.99%, n=49) by reason of the limited feedback which they received from jurors.
However, the areas assessed as most problematic for jurors were most interesting …[68]

Here, methodological difficulties and judicial reticence are disclosed. However, having conceded these difficulties any implications raised by the non-responses tend to be disregarded in the ensuing analysis. For example, in the extract above, the apparent lack of surprise in response to judicial ignorance renders the original question somewhat curious. Nevertheless, the analysis continues relying on what might be considered the 'more surprising' answers.

5 INTERPRETING THE RESULTS

Having examined a range of assumptions which shaped the construction and reception of the survey instrument, this section explores some of the inferential and analytical processes at work in the interpretation of the responses. These examples, while not entirely representative, do illustrate the compartmentalised treatment of different subjects and suggest that pre-commitments and assumptions appear to occasionally overshadow the actual judicial responses.

Example A: A Pilot Study of Medical Referees in Complex Litigation

The first example is taken from the second page of the Report.

[T]he fact that it is now apparent that many judges are so troubled about the quality of medical, accounting, scientific and engineering evidence that they are prepared to give serious consideration to such aids to expert evidence assessment as the appointment of referees and assessors has many ramifications. One amongst many might be the appropriateness of conducting a pilot study into the utility of using medical referees in complex medical negligence cases.[69]

The passage features a discernible emphasis on judges being 'troubled' by the quality of certain types of expertise. We are informed that among the possible responses is a pilot study assessing the use of referees in complex medical cases. Yet the survey featured no questions on the utility of using referees in complex medical negligence cases.[70] When we examine the actual judicial responses to a question about medical evidence the answers are not consistent with their presentation in the extract above. In Question 2.5 — which is concerned with the ability of judges to evaluate complex evidence — only a small minority of judges (10%) indicated that they had ever encountered difficulty evaluating medical evidence.[71] When asked about the most difficult types of evidence to evaluate (Question 2.6) only a tiny proportion (2%) of judges thought medical evidence the most difficult.[72] In their responses to Question 5.1 judges attributed few problems to jury comprehension of medical evidence (only 3%).[73] However, in this context the most controversial data were produced by Question 5.7. Only 10 per cent of judges expressed a preference for removing cases with complex and conflicting medical opinions from juries.[74] Together, these responses would seem to indicate that judges and juries (at least from the perspective of 'the jury' manifested in the Report) do not perceive the evaluation of medical evidence as especially difficult.

Further, when we examine the actual commentary in the Report, the assessment of Question 2.5 appears to be inconsistent with comments taken from the executive summary. For example:

Psychiatry, psychology and medicine/surgery only figured in a handful of expressions of concern. Given the numbers of worries expressed about doctors with different plaintiff- or defence-oriented views, this was a surprising result, suggesting that though witness bias in the medical area is regarded by judges as relatively common, it is manageable.[75]

This passage sits very awkwardly against the earlier claims about bias and the assertion that problems with medical evidence might warrant a pilot study into the use of medical referees. Indeed, the authors would appear to have inverted their results. Significantly, the more alarmist claims about medical expertise — those drawn from the first extract — are taken from a section of the Summary entitled: 'Need for Procedural Change'.

Example B: Disquiet at the 'Guiding Hand' of the Lawyer

'Let there be more efficiency and less theatre' was clearly the wish of many judges. Likewise there was ample recognition that expert reports, like affidavits, owe much to the guiding hand of the commissioning party's lawyer.[76]

And,

[t]he picture painted by a significant cross-section of respondents was one of worry about an unacceptable culture in sectors of disciplines providing report-writers and witnesses to the courts. The culture, they asserted, does not adequately value and put into practice independence, objectivity or transparency of opinions.[77]

Again, the contention that the 'guiding hand' of a lawyer constitutes a problem is not consistent with the data. The first issue, discussed previously, is that the investigators have asked respondents about activities remote from their judicial practice. The second difficulty is that the judicial responses do not support the authors' anxieties. Consider the following question and answers directed at the co-production of expert reports:

Q2.8 In the expert reports that are tendered to you, does it appear that lawyers have played a part in settling the content (for example, as commonly happens in respect of lay affidavits)?[78]

                                      freq    per cent    %-NR
(a)  never – please go to 2.10          61       25.00    25.85
(b)  occasionally                      126       51.64    53.39
(c)  often                              42       17.21    17.80
(d)  always                              7        2.87     2.97
     No response                         8        3.28
     Total                             244      100.00   100.00

By themselves these figures would seem to suggest that a slight majority of judges think that lawyers are occasionally involved in settling the contents of expert reports. Without more, this might be represented as, and considered to be, a problem. Though to do so would require making prejudicial assumptions about the performances of experts and lawyers. However, when the responses to Question 2.8 are read in conjunction with the answers to the following question, concerns about lawyer intervention appear to be substantially allayed. Indeed, these results could be read as a judicial endorsement of the lawyer's 'guiding hand'.

Q2.9 If you answered the previous question (b), (c), or (d), what is the usual effect that this participation by the lawyers has upon your assessment of the weight to be given to the expert's evidence?[79]

                                      freq    per cent    %-NR
(a)  it helps                           72       29.51    40.22
(b)  it hinders                         45       18.44    25.14
(c)  it makes no difference             62       25.41    34.64
     No response                        65       26.64
     Total                             244      100.00   100.00

Less than one fifth (18%) of respondents indicated that lawyer participation in settling expert reports 'hinders' their assessment of the evidence. The survey results would seem to suggest that many judges do not share the authors' concerns. They would also appear to present the outlines of more complex judicial impressions of lawyer–expert relations which might compromise the analytical value of the 'culture of partisanship'.
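To clarify the two percentage columns in the tables above: 'per cent' takes each frequency over all 244 returned surveys, while '%-NR' takes it over substantive answers only, excluding 'No response'. A minimal sketch of that arithmetic, using the Question 2.9 figures (the variable names are mine, not the Report's):

```python
# Recompute the 'per cent' and '%-NR' columns for Question 2.9.
# Frequencies are taken from the table above; 244 judges returned surveys.
responses = {"it helps": 72, "it hinders": 45, "it makes no difference": 62}
no_response = 65

total = sum(responses.values()) + no_response   # 244 returned surveys
answered = sum(responses.values())              # 179 substantive answers

# 'per cent' divides by all returned surveys; '%-NR' by substantive answers.
per_cent = {k: round(100 * v / total, 2) for k, v in responses.items()}
pct_nr = {k: round(100 * v / answered, 2) for k, v in responses.items()}

print(per_cent["it hinders"])  # 18.44 — the 'less than one fifth' figure
print(pct_nr["it helps"])      # 40.22
```

Run as-is, this reproduces the 18.44 and 40.22 figures in the table, which is why the 'hinders' responses can be described both as 18 per cent of all judges surveyed and as roughly a quarter of those who answered.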

Example C: Asymmetry in the Treatment of Judges and (Other) Experts

Perspectives discloses a latent tendency to treat the expertise attributed to judges differently from the manner in which non-judicial forms of expertise are treated. We have already seen (in Subsections 4.A, 4.B and 4.D) how judicial opinions are treated as unbiased, implicitly reliable and their analytical capabilities presented as superior to those of lay juries. Now we will consider an instance where apparent judicial limitations are neutralised.

In the following extract we can observe how, on this occasion, the responses of judges are excused on the basis of evidentiary complexity. The excuse is mobilised through the strategic use of more sociologically sensitive images of expertise:

All too often in the past the acceptance that there is no one answer has not been acknowledged sufficiently in the law's positivist search for definitive answers. The judges' response to this issue may be seen as demonstrating a consciousness of both the diversity of approaches and views in relation to many areas of expertise, and a cognizance that a number of fields of expert endeavour, when examined in detail, are significantly complicated. There is little that procedural reform or improved training can do to address this reality.[80]

Issues of complexity are recognised and discussed throughout the survey. However, when dealing with experts, complexity is rarely used to mediate (or excuse) their performances, or to suggest that impressions of bias might be misunderstandings or an incorrigible feature of expert knowledge. Different epistemological assumptions are applied to judges. Judges benefit from more sociologically sophisticated descriptions of expertise which recognise difficulties produced by uncertainty and complexity.[81] Judges are not criticised because they encounter difficulties understanding and evaluating complex forms of evidence. Experts, in comparison, are assessed against less sympathetic positivist-oriented criteria such as correct methods and ideal images of practice which implicitly require impartiality and high degrees of certitude.

The example reinforces the limitations of a survey which privileges the perspectives of judges. We can only assume that, had they been asked, experts might have attributed some of the responsibility for (the perception of) problems to complexity, processes of simplification, time constraints, resource limitations, arcane legal procedures and the technical (in)competence of lawyers, judges and jurors.[82] Surely those not included in Perspectives would have presented sociologically thicker descriptions of their practices, commitments and difficulties.[83]

Significantly, where expertise is conceived as diverse and complex (and perhaps genuinely contested) the authors appear to acknowledge that there is limited scope for improvement through procedural reform or further training.[84] Could it be that these, arguably more sophisticated, approaches to expertise render many of the other claims and proposals redundant? These questions would seem to be especially apposite if: complexity and uncertainty have the potential to influence perceptions of bias; much of the expert disagreement associated with litigation is legitimate (or we have no a priori means of distinguishing the legitimate from the illegitimate); or, we reject the positivistic commitment to unrealistic images of science and expertise.[85]

6 LAW REFORM: RESPONDING TO LEGAL PROBLEMS

Inspired by insights and practices from socio-legal studies and the social sciences, the previous sections endeavoured to demonstrate how particular assumptions and representations in Perspectives contribute to the appearance of a range of problems with expert evidence in Australian courts. In effect, Sections 3, 4 and 5 provided an indication of how a public problem rhetoric can be generated or, more pertinent to the case of expert evidence, perpetuated using what appears to be a fairly innocuous survey instrument. The empirical warrant for claims about problems with expert evidence, particularly bias, was contested on the basis of theoretical and methodological limitations. This section continues the analysis by examining the connection between a range of policy proposals and the results of the survey.

Assertions about the need for reform and judicial support for reform pervade Perspectives. The following extracts, in conjunction with others cited in this section, exemplify the reformist orientation of the study and Report: 'In the meantime, there are some findings which warrant response from litigation reformers. The purpose of this summary is to highlight those findings and to draw attention to important ramifications of the answers provided to the survey'.[86] And,

there is one step which can be taken straightaway to enhance the quality of expert evidence in Australian court rooms. Among expert witnesses and within the judiciary in England, in particular, recent years have seen a recognition of the need to develop codes of ethics and practice for forensic experts which will consolidate an expectation and practice of objectivity.[87]

The need for the proposed reforms and the existence of widespread and serious problems with expert evidence are presented as if they emerged without mediation from dispassionate judicial respondents. Consider the attribution of agency in the following passage:

They [the respondent judges] are concerned to reduce what they identify as a culture of inadequate objectivity by many doctors, accountants, scientists and psychologists, to improve the performance of experts and advocates alike and to explore means of bringing information before the courts in a form which is both clear and amenable to sophisticated and cost-efficient assessment.[88]

On the basis of the survey design and the data presented some of these claims appear hyperbolic.[89]

A Proposals

This subsection examines the proposals for reform in a way that is sensitive to their relationship with the survey dataset. The reform proposals are noteworthy for two reasons: (i) they target experts rather than advocates or judges; and (ii) they are not consistent with the results of the survey. The overwhelming focus on reforming expert practice might be considered, at the very least, curious in a survey of judicial attitudes. The data would seem to indicate that judges were as concerned about the performance of advocates and expressed a slightly stronger preference for training advocates than experts. The data also suggest the importance attributed to the clarity of expert evidence. The primary focus on expert performance, particularly reforms designed to eradicate bias, would therefore seem to be motivated by factors beyond the dataset.

At this stage we consider the reform proposals with respect to the protagonists identified in the Report.

i Experts

One solution to such difficulties is to improve the training of experts.[90]

Most of the proposed reforms are directed toward the performance of experts. The centerpiece of the reforms, the expert Declaration, is the only proposal elaborated in any detail. The Declaration is set out in full and emboldened twice in the Report.[91] It appears first in the 'Summary of Key Findings' and again in 'The Future'. Presented as a partial solution to the serious problems attributed to expert evidence, the Declaration is intended to 'enhance the quality of expert evidence' — to eliminate, or substantially reduce, bias and change the culture around expert witnessing.[92] It aims to make experts (more) accountable. The authors propose, '[b]uilding upon the Federal Court Practice Note and recent initiative in England',[93] that experts should complete a version of the following:

I, ……., DECLARE THAT:

1. I recognize that my overriding duty in writing reports and in giving evidence is to the Court/Tribunal, rather than to the party commissioning me and/or paying my fees.

2. I have used my best endeavours to produce my report in sufficient time to enable proper consideration of it.

3. I have made myself reasonably available for discussion of the contents of my report with professional representatives of all parties involved in the litigation.

4. I have provided within my report
(a) details of my relevant qualifications;
(b) details of the literature and other significant material that I have used in arriving at my opinion;
(c) identification of any person, and their qualifications, who has carried out any data selection, data inspection, tests or experiments upon which I have relied in compiling my report; and
(d) details of any instructions (whether in writing or oral, original or supplementary) which have affected the scope of my report.

5. I have used my best endeavours in my report, and will endeavour in any evidence which I am called to give,
(a) to confine myself to expressing opinions as an expert within those areas in which I am specially knowledgeable by reason of my skill, training or experience;
(b) to distinguish among the data upon which I have relied, the assumptions which I have made, the methods that I have employed, and the opinions at which I have arrived;
(c) to indicate those data, assumptions and methods upon which I have significantly relied to arrive at my opinions;
(d) to give succinct reasons for each of the opinions which I express;
(e) to be objective and unbiased;
(f) to make the opinions which I express clear, comprehensible and accessible to those not expert in my discipline;
(g) to be scrupulous in terms of accuracy and care in relation to the data upon which I rely, my choice of methods, and the opinions which I express arising from those data;
(h) to indicate whether I have been provided with all the data necessary for me to arrive at the views which I have expressed and whether I need further information;
(i) to indicate whether I have been apprised of any data or choice of method which might entail opinions which are inconsistent with the opinions which I have expressed; and
(j) to indicate whether I have been unable for any reason to employ the methodology which I would prefer to use before expressing an opinion.

6. If I become aware of any error or any data which impact significantly upon the accuracy of my report, or the evidence that I give, prior to the legal dispute being finally resolved, I shall use my best endeavours to notify those who commissioned my report or called me to give evidence.

7. I shall use my best endeavours in giving evidence to ensure that my opinions and the data upon which they are based are not misunderstood or misinterpreted by the Court/Tribunal.

8. I have not entered into any arrangement which makes the fees to which I am entitled dependent upon the views I express or the outcomes of the case in which my report is used or in which I give evidence.

According to the Report:

such a declaration is eloquent in terms of the ideals expressed. In time, it is likely to forge a culture of obligation on the part of expert witnesses primarily to the courts, rather than to the parties paying their fees. Finally, the presence of such a declaration articulates values, departure from which is likely to lead to little weight being placed upon the defaulting expert's views.[94]

This highly idealised — or, to adopt the authors' terminology, 'positivist' — approach implies that a Declaration will change a partisan culture they associate with expert witnessing. It also implies that experts are generally inattentive to a range of existing ethical, legal and professional obligations. The Declaration's limitations will be explored in more detail below (see Section 7).

In the discussion of further education and training for experts, drawing from Lord Woolf's Access to Justice report,[95] the proposed instruction is oriented to the operation of the legal system; particularly the expert's duty to the court.[96] While a majority of judicial respondents (Question 4.1) thought further training of experts in their forensic function was 'desirable' (55%), only a small minority thought that it was 'necessary' (11%) or 'essential' (5%).[97] When asked (Question 4.3) about which areas might be most beneficial for training, the most frequent responses were in the disappointingly ambiguous realms of 'communication' (22%) and the 'expert's role' (16%).[98] Most interestingly, given the predilections of the investigators, training in objectivity (or impartiality), methodology, falsification and reliability were not included in the proposed syllabus.

Another area of potential reform, greater use of visual and information technologies, is treated perfunctorily.[99]

Overall, there is little reflection about the tractability of expert knowledge, public understanding, the implications of simplification, translation or the integration of expert knowledge into a legal case.

ii Advocates

Ostensibly, the Report recommends further training for advocates. In practice, however, there is a tendency to downplay apparent judicial concern with the performance of advocates. In contrast to the response to expert performance, when dealing with the legal profession the market, formal education and self-regulation are the preferred means of stimulating change. There is nothing equivalent to the Declaration directed toward the practice of lawyers.

Interestingly, the question concerned with the perceived need for further advocate training is not the same as the question purporting to deal with experts. The range of answers provided (in Question 4.4) is slightly different from the answers supplied in a similar question about experts (Question 4.1). The category 'essential' is dropped, thereby reducing the number of positive answers available to respondents. Nevertheless, the vast majority of respondents thought that further training for advocates was either 'desirable' (63%) or 'necessary' (23%).[100] Though these figures are higher than the responses to the similar questions about experts, in the analysis they are presented as evidence of a perceived need for additional training: 'A clearer message about the need for further advocacy training could not have been given by the Australian judiciary'.[101] And, 'carefully targeted training has much to commend it and is enthusiastically advocated by judges'.[102] When asked about the directions for further training, most judges selected 'improved preparation skills' (47%) and 'improved skills in cross-examination' (25%). Slightly more than one in 10 judges selected 'enhanced knowledge of technical areas' (11%).[103] This last response might be considered intriguing given the authors' prior assertion that effective cross-examination requires a basic technical understanding.

In the limited space dedicated to possible reforms to advocacy the Report notes that advocates should appreciate that developing their skills will confer a competitive advantage. Additionally, there is a suggestion that advocates might join a cross-disciplinary society. 'What is necessary is a change to legal culture to recognise that cross-disciplinary knowledge will assist in the discharge of both solicitors' and barristers' functions'.[104] We can only assume that, at this stage, such cross-disciplinary societies are largely composed of expert witnesses! The market and the increase in relevant undergraduate and postgraduate courses are presented as the appropriate regulatory mechanisms. Even though the data would seem to suggest that judges are more concerned with the performance of advocates than the performance of experts, the Report dedicates far more attention to the reform of expert practice. In effect, there are no substantial proposals, and none that might be readily implemented, for reform to advocacy or advocacy training.[105]

Significantly, the discussion of experts and advocates is distinguished by the role objectivity plays in relation to each. Adopting highly conventionalised images of the adversarial system, professionalism and law–science relations, the major concerns pertaining to the advocate revolve around ethical issues and case preparation. The focus on experts is directed to bias and communication. The characterisation of the lawyer as a professional advocate and the presentation of expert knowledge as potentially partisan prevent investigation of the lawyer's role and agency (or 'guiding hand') in the selection, strategic manipulation and representation of expertise.[106] The models of law and expertise adopted by the investigators structurally preclude certain lines of investigation and are prefigured to attribute most of the responsibility for any problems to experts.

iii Judges and juries

It might be considered surprising, in a survey investigating the attitudes of judges, that none of the proposals for reform and none of the assembled data are used to make substantial recommendations for changes to judicial practice. The approach, focused exclusively on judicial impressions, has largely predetermined the directions of reform. The authority of judicial opinions, filtered through the survey instrument, lends legitimacy to the reform of other parts of the system. In this way the survey acts as a form of empirical ventriloquism.[107] The imprimatur of judges is appropriated and used to promote a series of reforms — directed away from the judiciary.

These outcomes may reflect deference to judicial respondents or a failure to recognise some of the methodological limitations inherent in the survey. Having privileged the perspective of judges, the authors do not consider why judges might attribute responsibility for apparent problems to others. The authors take it as self-evident that experts and advocates are responsible for most of the difficulties with expertise. Consequently, there are no genuine proposals to change judicial practice. Proposals which are linked to judicial performance and the reform of evidence rules are flatly rejected by the vast majority of judges. The result is a survey of judicial respondents which attributes responsibility for problems with expertise to non-judges and resists the reform of judicial practice. The design of the survey and reforms — like the Declaration — is generally in the professional, institutional and managerial interests of judges. Depending on the models of expertise adopted, the Report might be understood as an exercise in victim-blaming and cost-shifting. Experts are blamed for alleged legal problems with their expertise and parties will bear the cost of any additional labour.

For as long as experts have been criticised as venal mountebanks and charlatans, there have been dissenting voices.[108] Experts have described their participation in legal settings in terms of exploitation, misunderstanding and abuse. Does the emphasis on the need to reform expert behaviour and change what is asserted as the 'culture of partisanship' merely perpetuate this longstanding and highly questionable dichotomy?

iv Parties and publics

There are few references to parties or members of the public in the Report, and no reflection about any broader legal or social implications flowing from the proposed reforms. There is little concern about whether citizens will find accessing courts more difficult or more expensive. More particularly, the cost or admissibility implications of the Declaration are not discussed.

B Evidence for reform

As we have seen, the purported need for law reform in the area of expert opinion evidence is a central and ubiquitous feature of Perspectives. According to the authors, problems associated with expert evidence require fundamental changes to the traditional Anglo-Australian adversarial system. Apparently, the need for reform is recognised by a substantial proportion of judicial respondents:

The judicial survey has shown that many judges are prepared to contemplate thoroughgoing reforms to trial procedure in both the civil and criminal areas in relation to expert evidence. Judges, and probably the general community, are no longer wedded to the traditional Anglo-Australian concept of the adversary system, in which litigation is entrusted to the hands of the parties and judges are expected to remain relatively uninvolved.[109]

And,

[t]he problems presented by expert evidence, amongst a number of contemporary problems within the civil and criminal justice systems, most particularly their cost and inaccessibility, have elicited from a substantial part of the Australian judiciary a preparedness to contemplate fundamental change to the structure of the litigation system.[110]

Having elicited the views and support of the Australian judiciary — a convenient and powerful substitute for the more limited number of judicial respondents — the authors hope to convert the judicial 'goodwill' into reform capital: 'The next step is to translate this goodwill into workable and cost-equitable procedures that will address the problems highlighted with the traditional means of adducing and evaluating party-generated expert evidence'.[111] These claims might have been more persuasive if they were supported by the data, if the authors had directly inquired about the perceived need for law reform, and if the alleged difficulties were incontrovertibly problems which could be improved by the particular reforms proposed.

i Does the survey provide evidence of judicial support for law reform?

There are few questions in the survey instrument which deal explicitly with the subject of law reform. A charitable interpretation of questions with obvious reform dimensions would encounter difficulty credibly extending beyond Questions 4.1, 4.4, 5.8, 6.2, 6.3, 6.10, 7.4, 9.5, 9.9 and 9.13. If we examine some of the more direct questions and those with the strongest support for the authors' claims then the fragility of the reform agenda will become more readily apparent.

The strongest support for possible reform arises in the context of further training for experts and advocates. In this context the majority of respondents indicated that 'training of experts in their forensic function' (55%) and 'training of advocates in calling and cross-examining experts' (63%) was desirable. About one sixth (16%) thought that such training was necessary for experts and about one quarter (23%) thought it necessary for advocates.[112] Significantly, the judges appeared to be more concerned with improving the performance of advocates. Though, as we have seen, this desire to have advocates trained is not reflected in the authors' substantial reform proposals. Rather, in line with their commitment to a particular public problem perspective, the authors focus on the need to rectify the performance of experts in terms of their objectivity.

Other potential evidence for judicial support for law reform concerns the use of court-appointed experts.

Q9.5 Are you of the view that more use of court-appointed experts would be helpful to the fact-finding process?[113]


                                  freq    per cent     %-NR
 (a)  yes                          119       48.77    54.34
 (b)  no                            74       30.33    33.79
 (c)  I do not have an opinion      26       10.66    11.87
      No response                   25       10.25
      Total                        244      100.00   100.00
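The '%-NR' column can be read as the percentages recomputed after excluding the 25 non-responses from the denominator. A minimal sketch, assuming only the frequencies reported in the table above:

```python
# Recompute the two percentage columns reported for Question 9.5.
# Frequencies are taken from the table; '%-NR' excludes non-responses.
responses = {"yes": 119, "no": 74, "no opinion": 26}
non_response = 25

total = sum(responses.values()) + non_response   # 244 questionnaires returned
answered = sum(responses.values())               # 219 substantive answers

per_cent = {k: round(100 * v / total, 2) for k, v in responses.items()}
per_cent_nr = {k: round(100 * v / answered, 2) for k, v in responses.items()}

print(per_cent)     # {'yes': 48.77, 'no': 30.33, 'no opinion': 10.66}
print(per_cent_nr)  # {'yes': 54.34, 'no': 33.79, 'no opinion': 11.87}
```

On either denominator, 'yes' responses fall short of an outright majority of all questionnaires returned, which bears on how the 'almost 50 per cent' figure discussed below should be read.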

Once again, what is presented as apparently cogent support appears, on inspection, more equivocal. Question 9.5 is, at best, indirectly concerned with law reform. Judges have been empowered to appoint experts for hundreds of years.[114] Recent reforms have tended to confirm or extend that power.[115] While almost 50 per cent of the respondents (49%) agreed with the proposition in Question 9.5, this cannot simply be equated with a preference for law reform or reconciled with a range of competing legal values, some of which are raised in Question 9.3. Agreeing with the proposition that the increased use of court-appointed experts might assist with fact-finding is not equivalent to supporting the use of more court-appointed experts or accepting that such experts are neutral or readily available. This seems to be supported by the low incidence of judges actually appointing experts unilaterally and, more pertinently, by the authors' own assessment of such appointments: 'Australian judges' support in principle for further use of court-appointed experts raises a number of practical challenges if it is to be implemented'.[116] The reform-oriented interpretation of Question 9.5 seems to acknowledge apparent inconsistencies between judicial authority, in-principle support and practical challenges. The authors have subtly transformed a question about the perceived value of court-appointed experts for fact-finding — in the abstract — into a question which purports to ask about mechanisms for improving the use of court-appointed experts or changing expert practice.

Perhaps the best evidence against any widespread judicial enthusiasm for law reform, even reform of a less than fundamental kind, is the apparent reluctance to support changes to evidence law. Despite the authors' insistence, there is no evidence of a judicial groundswell in favour of reform. When asked about abolishing various rules associated with the provision of expert evidence (Question 7.4, see Figure 17 above) the highest percentage in favour, by a factor of three, did not even reach one quarter (23%) of respondents.[117] On the few occasions when the respondents were directly asked, as in Question 7.4, the judges appeared to be reactionary in their opposition to potential change.

The absence of questions linked directly to law reform and the ambivalent judicial responses do not prevent the authors persisting with their two main themes — the threat posed by bias and the need for fundamental reform — both purportedly derived from the perspectives of judges: 'Many judges who responded to the survey identified partisanship or bias on the part of expert witnesses as an issue about which they were concerned and in respect of which they thought that there needed to be change'.[118]

In addition, the lack of inquiry about cost implications or the practicalities of reforms does not prevent a range of assertions in the 'Summary of Key Findings' and 'The Future': 'The judicial survey demonstrates a readiness on the part of many Australian judges to canvass practical and cost-neutral procedural changes which will address the challenges posed by complex and conflicting expert evidence'.[119] There are no questions in the survey which explicitly deal with costs, accessibility, case management or a range of practicalities associated with the use of experts.[120] Those questions which might have raised them even incidentally, such as Question 9.3 where several reasons for not using a court-appointed expert are canvassed, are inordinately narrow. Consequently, neither the source nor the derivation of such assertions is apparent.

ii Are the reforms appropriately targeted?

Having questioned the methodological propriety of claims used to insinuate serious problems with expert evidence, as well as claims about the extent of judicial support for reform, it is nevertheless worth considering the possible impacts of the reform proposals. Disregarding, for the moment, the empirical support for the existence of serious problems associated with the provision of expert evidence, we will consider whether the proposed reforms would relieve the alleged problems.[121]

Will the Declaration change expert practice? Probably not. The Declaration is presented as innovative and eloquent. According to the authors it will eliminate the most egregious forms of bias and partisanship by requiring experts to sign a declaration about their work which reiterates their paramount duty to the court. However, on reflection such requirements are hardly innovative. Since the Middle Ages, experts have owed a duty to the court and have taken an oath (or affirmation) to tell 'the truth'. Could experts really be under the impression that deliberate partisanship is acceptable? Rather, if deliberate partisanship is prevalent then its existence may be a consequence of the difficulties of appearing impartial and/or a widespread realisation that courts experience difficulty taking meaningful action in response to expert evidence.[122] Moreover, the Declaration says little about possible sanctions for derogation, or how such derogations might be ascertained and proved.[123] Do judges possess the technical abilities to distinguish between (what are presented as) wilful breaches as opposed to genuine adherence to idiosyncratic views?[124] If challenged, experts will presumably experience few difficulties providing bases for their opinions, however tenuous they might appear to others. There should also, at least according to prevailing folklore, be no shortage of experts willing to testify in defence of their peers.[125]

Perhaps more importantly, without changing the 'culture of experts', does the Declaration impose new types of assumptions and standards, such as the imposition of particular images of reliability or higher admissibility thresholds, via implicit images of science and expertise?[126] The authors appear to be committed to the idea of 'mainstream opinion'.[127] Does the Declaration represent a substantial departure from existing Australian admissibility jurisprudence? Does the proposal threaten to surreptitiously raise admissibility standards by promoting more exacting images of expertise?[128]

The proposed Declaration would appear to exert limited influence on inadvertent forms of bias.

Significantly, and curiously, occasionally the authors seem to recognise some of these limitations without tempering their enthusiasm for reform: 'However, practice directions and mandatory witness declarations can only go so far in addressing issues identified by the respondent judges as problematic'.[129]

How will the reforms change expert culture? Claims about the 'culture of partiality' are merely assertions. They are underpinned by the frightfully Manichean implication that experts are either biased or unbiased. Once we accept (along with the authors at various places in the Report) that experts frequently and legitimately disagree, then the culture of partiality becomes highly suspect as an analytical tool. This seems to be confirmed by the difficulties determining whether a particular expert is biased, whether intentionally or not. We can always allege that an expert is biased. Usually, such an allegation will involve attributing various forms of interest or alignment.[130] Proving that an expert is actually biased in a way that ought to change the status of their evidence may be harder.

Once again it would seem that there is no reason to believe that the completion of an additional form (the Declaration) will do more than reinforce an existing symbolic order. The chances that the Declaration will superimpose a new order of practice would seem extraordinarily low. Ironically, what it may facilitate, especially among judges, is more frequent use of the kind of simplistic positivist-inspired images of expertise underpinning the Declaration and recent reforms to procedure in Australian courts. We can probably expect more criticism of partisan experts and more alarm about the deleterious effects of 'junk science'.[131]

Will training in their forensic function improve expert performances? Equivocation in the Report provides some indication of potential limitations: 'The irony is that, on the one level, there is concern about the presentational skills of expert witnesses being inadequate, and, at another level, about the skills being of too high an order and resulting in the risk of jurors [rather than judges] being overborne by "witness performance"'.[132] It is possible that additional training and experience will make experts even more comfortable with litigation settings. For judges, like those on the US Supreme Court, who think that the proper place for the scientist is in the laboratory and not the courtroom, further training threatens to improperly enhance the forensic skills of the expert.[133] Despite the authors' confidence, it is difficult to anticipate the practical effects of their proposals. Training may have unanticipated effects. Experts are neither passive nor unimaginative. Training will not merely enhance their performance in the ways that judges (and commentators) might desire.[134]

7 PUTTING THE DECLARATION TO WORK

One illuminating way to assess the study and recommendations for law reform, including recent reforms in several State and federal courts, is to apply the proposed Declaration to Perspectives.[135] This application exposes limitations to appeals to protocols, normative frameworks and idealised images of method and expertise for legal practice.[136] It demonstrates how the Report might not even meet the standards which its authors would require of expert reports prepared for litigation contexts. This example, concerned with research not constrained by the exigencies of litigation, highlights how expert knowledge, even expert knowledge purporting to have no explicit preconceptions, can be effectively subverted. It also illustrates how the meaning of neutrality, objectivity, partisanship and bias can be strategically manipulated in ways that render some representations difficult to maintain. (The bold sections, below, are drawn from the expert Declaration.)

5. I have used my best endeavours in my report, and will endeavour in any evidence which I am called to give,

(c) to indicate those data, assumptions and methods upon which I have significantly relied to arrive at my opinions;

The authors of Perspectives claim to possess 'no explicit preconceptions'.

While many surveys are conducted to test an hypothesis, this survey had no explicit preconceptions. Rather, it sought to gather information, to go beyond the occasional comment in a judgment, and empirically to ascertain judicial beliefs or approaches in relation to the way in which evidence from other disciplines fares, and should fare, in the course of contemporary litigation processes.[137]

If we were to accept the authors' claims in the extract above at face value then this section of the Declaration could be routinely circumvented. Avoiding serious methodological or philosophical considerations, the authors claim to be merely reporting what is out there.[138] The reluctance to engage with methodological or 'epistemological discourse' on the grounds that the survey and Report are merely gathering and presenting information is misguided. That stance obscures assumptions and values embedded in the survey and, more explicitly, the selection of methods, the interpretation of the survey data and the promotion of a reform agenda.[139]


(e) to be objective and unbiased

Here again, objectivity and bias assume a pivotal role. Have the authors been objective and unbiased? Assuming, for the moment, that the requisite standard of objectivity was obvious and attainable, then on the basis of the foregoing analysis we might suggest that the survey and analysis are insufficiently 'objective'. Not only do the authors claim to possess no preconceptions, the survey seems to be designed (whether inadvertently or not) to identify, reinforce and accentuate empirically tendentious problems with expert evidence. The findings are reported in a way that appears to support substantial modification to our adversarial systems. Unsustainable models of expertise and objectivity along with strained interpretations guide and compromise the entire enterprise.

(g) to be scrupulous in terms of accuracy and care in relation to the data upon which I rely, my choice of methods, and the opinions which I express arising from those data;

The authors' methodology has been criticised at length. If these criticisms were accepted it would be difficult to defend their choice of method, collection and interpretation of data as either scrupulous or accurate.[140] We only have to recall the questions about the jury (Subsection 4.D) to impugn the choice of methods and Section 5 on interpretation to challenge the accuracy, consistency or reliability of the data, interpretations and opinions.

(h) to indicate whether I have been provided with all the data necessary for me to arrive at the views which I have expressed and whether I need further information;

Initially, the authors describe the survey of judges and magistrates as merely the early phases of a larger project. Despite these claims, other perspectives, recognised by the authors as legitimate, are conspicuously absent from the Report. The preliminary findings, and the reforms attributed to them, lack the necessary qualifications. On the basis of the initial survey(s) the authors would seem to be insufficiently apprised to draw firm conclusions. Even according to their own methodological precepts, in order to establish several of the claims they would be required to survey other protagonists such as experts, lawyers, jurors and parties.[141] Claims in Perspectives, however, suggest that the authors do not consider the absence of additional research as an important constraint upon their project as it now stands.

There is a further methodological issue (which also relates to 5(g)): whether data generated by a questionnaire could ever provide an adequate source for the kinds of interpretations, claims and reforms being proposed.[142] It may be that other kinds of research, such as ethnography, ethnomethodology and open-ended interviews would be required to augment or generate a more meaningful and reliable appreciation of the roles played by experts (and judges) in contemporary litigation. This type of research may require more sophisticated methodologies and theories of expertise.[143] It would also provide more direct access to the practices and performances of a range of individuals than relying on the mediated public repertoires of judges.

(i) to indicate whether I have been apprised of any data or choice of method which might entail opinions which are inconsistent with the opinions which I have expressed;

The authors ought to have defended their preferred methodology against some of the many alternatives. The absence of any discussion of alternative methodologies functions, misleadingly, to imply the situational adequacy of the survey. Several academic traditions are sceptical of questionnaire surveys.[144] The utility of questionnaires requires attention even before we consider the specific methods employed in this survey. Those disciplines in which questionnaires are routinely employed offer sophisticated approaches to investigation which seem to be absent from this enterprise. The Report does not explain the relative advantages (and disadvantages) of different research methods nor why the investigators selected their particular approach(es).

In addition, investigators undertaking a study of judicial attitudes toward expert evidence might have inquired into disciplines more centrally preoccupied with expertise in contemporary society. The perpetuation of myths about law and science and the absence of sustained attention to decades of research, including a not inconsiderable quantity of empirical research, in science and technology studies (STS), the sociology of scientific knowledge (SSK), the history and philosophy of science (HPS), and the social construction of technology (SCOT), are surprising and, according to other parts of the Declaration (5(e), 5(g), 5(h) and 5(i)), methodologically unsettling. The lack of references to literature in the public understanding of science (PUS) is remarkable.[145]


6. If I become aware of any error or any data which impact significantly upon the accuracy of my report, or the evidence that I give, prior to the legal dispute being finally resolved, I shall use my best endeavours to notify those who commissioned my report or called me to give evidence.

Although the Report was not developed for a particular litigation context, but was designed to provide information capable of facilitating law reform, there is no reason why a disclosure obligation, like the Declaration, could not also apply to expertise in this different legal context. Assuming that the investigators were to accept some of the criticisms which have been levelled at their research, would it be realistic to expect them to publicly qualify or retract any of their claims? Asking this sort of question illustrates that the Declaration (and reforms like it) are inattentive to the extent to which expert opinion is contested and controversial.

The Declaration is unlikely to resolve expert controversy, identify reliable expertise or change expert culture, at least in the manner envisaged. Requirement 6 demonstrates one of the major philosophical and practical difficulties with simplistic approaches to doctrinal formulations of method (like falsification) and, simultaneously, the limited value of the Declaration. Applying the Declaration introduces new interpretative choices. Should this article be understood as a falsification (or, in grossly simplified terms, debunking) of the Report and the reform agenda? After all, it impugns the survey instrument, the models of expertise, and the interpretation of the data. If it is not considered as some kind of falsification, then on what grounds is it not? If the authors (or others) were to mount a defence of the Report this would not simply vindicate or rehabilitate their work. If such a defence was not widely accepted or some commentators remained ambivalent, that state of affairs would only demonstrate how (the Declaration, and) what counts as the falsification (or even criticism) of an experiment, survey or knowledge claim, is often contested and not always resolved by appeals to method or objectivity.[146]

The specialist literatures on science and expertise — even those unconcerned with law — are overflowing with episodes of scientific controversy and examples of contested experiments or interpretations where different 'camps' defend the methodological propriety of their respective positions.[147] For some, like the historian Thomas Kuhn, it was the death of scientists rather than the identification of bias, allegations of methodological impropriety or persuasive demonstrations that changed the commitments held by individuals and research communities.[148] Bias is frequently enlisted in expert controversies but it does not seem to play a consistent role in resolving them.[149]

These examples illustrate how research undertaken in good faith can be compromised through methodological challenges which in turn might facilitate allegations of bias. If investigators working with the imprimatur of the AIJA can have their non-aligned survey and Report impugned, then how much easier will it be to challenge the opinions of aligned experts and even court-appointed experts in and around adversarial litigation? This article, and especially this section, has endeavoured to demonstrate how purportedly impartial research, sponsored by peak organisations and reviewed by a range of international scholars, judges and lawyers, can be opened up and criticised. Strategically applying the proposed standards for expert performance to the authors' own efforts highlights a range of methodological flaws and illustrates how the proposed solutions are unlikely to produce the kind of practices or cultural shifts desired.

8 CONCLUSION: ESCAPING THE FURROWS EMPIRICALLY

From the opening pages Perspectives is awkwardly situated, fluctuating between a narrow inquiry into judicial attitudes and a more comprehensive analysis of expert evidence and the roles played by experts. This ambiguity enables the authors and those using the survey to shift between (more) legitimate and illegitimate interpretive registers. Where the findings are qualified, the survey purports to offer insight into the perspectives of judges. Where the findings are expanded, judicial attitudes are substituted for an accurate description of legal reality. Making allowances for several qualifications and a few restrictive interpretations, Perspectives generates its value, its potential policy and reform purchase through its presentation (and reception) as a methodologically rigorous empirical description of the role of experts in contemporary litigation.[150] Unfortunately, such an interpretation seems inappropriate.

Perspectives is vitally concerned with its empirical status. The full title is Australian Judicial Perspectives on Expert Evidence: An Empirical Study. A lack of empirical information was proffered as one of the main reasons for funding and undertaking the study. This article, however, is intended to temper some of the enthusiasm attending the reception and over-reading of empirical legal research. Without wishing to cast aspersions on the importance of empirical investigation per se, we should be careful not to characterise (all) empirical studies as especially reliable or accurate descriptions of practice. When reading and drawing upon empirical research we ought to be attentive to the assumptions, methods and conclusions made by the investigators. Readers should be cautious about converting research findings into conclusions accorded the mantle of fact. They should be especially cautious about the results of questionnaires and judge-only surveys.

Answers to the question of whether 'bias' should be understood as a serious legal problem — and/or a problem requiring law reform — will depend on who is asked and the models of expertise (and law and bias) employed. The design of this questionnaire may have skewed the responses. We ought to wonder whether an inquiry into judicial attitudes to presupposed problems is the best way to investigate experts, 'expert culture' and expert evidence. Perhaps, given the approach, it is not surprising to find Perspectives merely rehearsing longstanding clichés about the parlous behaviour of partisan experts while remaining exceptionally deferential to judges.

For those who believe in the possibility of obtaining unbiased expertise the presence of bias may represent a very serious threat to legal institutions and social order. However, once we adopt more theoretically and empirically plausible models of expertise such simplistic models of bias and objectivity become both less tenable and less threatening. Only when we recognise that strong forms of objectivity are unattainable can we begin to craft more pragmatic means of identifying forms of expertise which are understood as adequate for the purposes of legal decision making. Unavoidably, there will be ongoing debates about the meaning of adequacy and appropriate standards of admissibility. In thinking about expertise, we need not be inattentive to the possible effects of forms of bias, whether deliberate or inadvertent, or alignment, degrees of (in)dependence and interests. However, by themselves these attributions are unlikely to afford definitive, consistent or reliable means of identifying or assessing expertise. Attributions of bias, neutrality, objectivity and independence are unlikely to produce bright-lines for demarcation or command consensus in understanding particular proffers of expertise. They do not provide a productive basis for substantive law reform.

Interestingly, virtually all recent sociological work in the area of law and science tends to reject any sharp bias–objectivity dichotomy as both theoretically and empirically implausible. More significantly, given the proposed reforms and the potential attributed to court-appointed experts, several commentators have stressed the value, for social democratic polities, of maintaining public forums where experts can disagree and be critically engaged.[151] While such celebrations may exaggerate the percentage of cases going to trial, the number of experts appearing in person, the social educative function of litigation and the ability of courts to systematically deconstruct contemporary forms of expertise (a theoretical version of something akin to making experts accountable), the political importance of allowing experts to appear, disagree and respond to critical examination should not be underestimated. Greater judicial recourse to court-appointed experts, raised admissibility thresholds and standards of evaluation will tend to make judges and the legitimacy of legal institutions even more dependent on (and more vulnerable to) non-legal forms of expertise.[152]

Many of the images of expertise used in Perspectives, especially those underlying the reforms, are idealistic and occasionally nostalgic. Science, for example, is presented as monolithic. Scientific knowledge is presented as uncontaminated by interests and bias. The definitions and boundaries around purportedly reliable and unreliable (or 'junk') expertise are presented as stable and identifiable.[153] All types of expertise are treated as if they were bound by similar values, methods, norms and commitments. The best scientists are presented as those reluctant to leave the laboratory or the operating theatre for the untidy realm of legal practice. Those who become professional witnesses are presented as among the worst. These highly simplistic images make no allowance for the actual practices of different types of expertise or changing political economies.[154]

The idea that good scientists (or experts) spend their entire lives in laboratories is misconceived. Increasingly, scientists are required to participate in public policy debates, are pressed to search for practical (often equated with profitable) applications for their work, are linked through partnerships with industry, are engaged in the popularisation of their disciplines and work as consultants. Transformations in the sciences are a function of globalisation, new (often private) funding arrangements, changes in policy directions as well as more localised institutional and professional developments — such as the rewards for publications and patents, along with the rise of the biosciences.[155] Interestingly, some of these developments — in particular commercialisation and relations with industry — may make the appearance of objectivity more difficult to sustain. We should not, however, underestimate the (perceived) value of putatively objective expert evidence to those endeavouring to publicly rationalise their decision making.

Perhaps it is no coincidence that transformations in the organisation of the sciences are contemporaneous with legal demands for greater detachment (in the form of independence and objectivity). Changes to the sciences, in conjunction with shifting public perceptions of expertise, may be contributing to the difficulty (judges experience) identifying reliable, let alone objective, expert opinions. The point is not that scientific practice was once asocial and objective.[156] Rather, the point is that many aspects of the scientific economy may have been less conspicuous, less politicised and less contentious. In the past it may have been easier to maintain the appearance of impartiality. Reforms and legal practice ought to be sensitive to the changing realities of scientific practice and public perceptions of expertise. Reforms should not be based on the professional rhetoric of scientists (or judges).[157] Rather, those advocating reform should be attentive to modern scientific practices (in the plural) and the socio-institutional contexts (again in the plural) in which modern forms of expertise manifest.

In making these points it is my intention to suggest that there is no Archimedean realm for judges to inhabit — in law or the sciences. Recognition that there are no neutral experts does not render all expertise unreliable or unhelpful, nor does it lead to abject relativism. Contrary to the claims of pro-corporate polemicists like Peter Huber, judging is inescapably and fundamentally political.[158] To adopt, as Huber and the authors of Perspectives advocate, a positivistic methodology is to structurally foreclose certain possibilities. Recourse to largely unexplicated categories such as 'partisan experts' and 'junk science' threatens to sweep judicial assumptions about expertise and much of their important epistemological work under the carpet.[159] Regardless of appearances, the act of deciding is political and, more disconcertingly (at least for judges and experts), open to politicisation. If there were simple ways to identify the essence of expertise or ascertain reliable knowledge none of this would matter. If we could consistently identify detrimental forms of bias (or even unbiased expertise), presumably Freckelton, Reddy and Selby would have explained how. All this only reinforces the fact that definitions, theories and models of the sciences and expertise make a difference. No matter how hard they scrub, no matter how forceful the disclaimers, judges will continue to have epistemological debris under their fingernails, if not all over their hands and judgments. To suggest that judges can avoid their very important (constitutive) role in the examination, assessment and legitimation of forms of expertise on the basis of recourse to simplistic positivist models, whether via a Declaration (or its equivalent) or through the US Daubert criteria, is not only naïve but politically risky.

Notwithstanding criticism of the survey instrument, which extended to the interpretation of the data and proposed reforms, this article should not be understood as some Panglossian defence of existing legal practice.[160] It is not intended to suggest that, as it stands (or stood), everything is rosy, or that certain types of expert practice, perhaps including forms of partisanship, do not cause acute problems for legal practice. Rather, this article has sought to destabilise some of the central claims in Perspectives. The many criticisms are not intended to suggest that we are unable to formulate more meaningful assessments, including meaningful empirically-based assessments, of the roles played by expertise in contemporary legal settings and beyond. It does imply that such assessments might require investigation which extends beyond the apex of the legal hierarchy.[161] Investigators should be genuinely interested in the valuable perspectives and experiences of non-judicial participants.

What then can we say about the state of our legal system and the roles played by experts within it? Unfortunately, we are not in a position to say a great deal. Asking the question, however, does bring us to a final assessment of the work of Freckelton, Reddy and Selby. Given the dearth of accessible information, should we embrace Perspectives on the grounds that it is 'the best we've got' or 'the only empirical inquiry' into judicial attitudes in Australia? While both claims might be trivially true, I leave it to the reader to decide whether they wish to embark on such a leaky boat. Perhaps a more apposite inquiry, given the questionable assumptions, interpretations and readings, is whether those using the Report are capable of avoiding the ideological furrows (re-)worn by its authors. The results so far are not particularly encouraging.

Perhaps more than ever before the issue of expertise warrants immediate and sustained empirical investigation.


[*] Faculty of Law, The University of New South Wales, Sydney 2052, Australia. E-mail: <g.edmond@unsw.edu.au>. Earlier versions of this paper were presented in a special session on 'Science and Law' at the Australasian Association for the History, Philosophy and Social Studies of Science Annual Conference 2003, University of Melbourne, 30 June – 3 July 2003, at the Staff Seminar Series, Faculty of Law, University of Newcastle, August 2003 and at the 21st Annual Law and Society Conference, University of Newcastle, 8–10 December 2003. The author would like to thank those who commented on drafts, along with the referees and editors.

[1] Similar issues arise in relation to the recent reforms to Federal Court and NSW Supreme Court procedures: Practice Direction: Guidelines for Expert Witnesses in Proceedings in the Federal Court of Australia 2003 (Cth) and the 'Expert Witness Code of Conduct' in the Supreme Court Rules 1970 (NSW) sch K.

[2] Ian Freckelton, Prasuna Reddy and Hugh Selby, Australian Judicial Perspectives on Expert Evidence: An Empirical Study (1999) and Ian Freckelton, Prasuna Reddy and Hugh Selby, Australian Magistrates' Perspectives on Expert Evidence: A Comparative Study (2001). This article is primarily focused on the survey of judges, though most of the comments are applicable to the subsequent survey of magistrates. References to Australian Judicial Perspectives are abbreviated as 'AJP'.

[3] AJP, above n 2, 16.

[4] Ibid.

[5] The reader is told that the opinions of other participants will be examined at some future stage.

[6] AJP, above n 2, xi–ii.

[7] Ibid 1.

[8] This article does not attempt to systematically analyse the judicial comments collected with the questionnaire. Primarily because: the multiple-choice questions and answers appear to be the principal basis for the empirical assertions in the text, and the basis for all numerical claims; the comments are not available to readers whereas the assessment of the prefigured responses and data provide materials which are readily accessible; and finally because there is no obvious methodology associated with the presentation of judicial comments. In general, the comments are used to confirm or reinforce, and occasionally qualify, results. We are not always told how representative particular comments are, nor how many judges actually produced written comments (to particular questions).

[9] AJP, above n 2, 7.

[10] Ibid 38.

[11] Steve Woolgar and Dorothy Pawluch, 'Ontological Gerrymandering: The Anatomy of Social Problems Explanations' (1985) 32 Social Problems 214. One caveat. It might be thought that there is a certain irony in criticising the work of others, especially where it involves the identification of social problems, by attributing methodological inadequacies (or problems) to their own practices. These kinds of debates, concerned with manipulating the existence, identity and extent of social problems, have been described by Woolgar and Pawluch as forms of ontological gerrymandering. The concerns espoused by Woolgar and Pawluch might have been (more) pertinent if this article was purely theoretical or simply rejected the existence of problems — including problems with bias — associated with the use of expertise. But it does neither of these things. The article does not attempt to suggest that the use of expertise is without difficulties. And, quite intentionally, it reinforces the importance of methodologically competent social scientific research and the value of incorporating perspectives from research traditions beyond conventional areas of legal scholarship.

[12] AJP, above n 2, 142. The questions and responses are set out in appendices to the Report.

[13] Ibid 143–4.

[14] Interestingly, those people who actually interview and study scientists (and experts) have not consistently identified forms of bias (or partisanship) as serious problems in the practice of science. Rather, studies of scientists suggest that the simplistic normative models of science (or institutional imperatives), conventionally and most famously associated with Robert Merton — such as commun(al)ism, disinterestedness, organised skepticism and universalism — do not adequately capture or explain scientific practice. Indeed, studies by Mitroff indicate that scientists may actually value 'counter-norms' — such as dogged commitment to ideas, even in the face of apparently disconfirming evidence. Further, more recent work on norms and rule following (inspired by Wittgenstein) would seem to indicate that even where strong norms or prescriptive rules exist, the rules rarely explain how they should be applied and this enables a range of divergent — even competing — interpretations. Consequently, norms like disinterestedness or skepticism are practically incapable of guiding practice in any specific context, especially in areas of uncertainty and controversy. See Robert Merton, The Sociology of Science: Theoretical and Empirical Investigations (1973); Ian Mitroff, The Subjective Side of Science: A Philosophical Inquiry into the Psychology of the Apollo Moon Scientists (1974); Harry Collins and Trevor Pinch, Frames of Meaning: The Social Construction of Extraordinary Science (1982); Michael Mulkay, 'Interpretation and the Use of Rules: The Case of the Norms of Science' in Thomas Gieryn (ed), Science and Social Structure: A Festschrift for Robert Merton (1980) 111.

[15] AJP, above n 2, 38. See also at 12, 29.

[16] Ibid 113.

[17] There is repetition in the treatment of bias and partisanship (Questions 2.2, 2.3, 2.11, 6.8 and 6.9). This may help to explain why judges 'raised the issue more than once'. Furthermore, there is a considerable literature on the ordering of questions, the use of repetition and how early questions may stimulate or structure the responses to subsequent questions. See, eg, Jon Krosnick and Duane Alwin, 'An Evaluation of a Cognitive Theory of Response-Order Effects in Survey Measurement' (1987) 51 Public Opinion Quarterly 201; William Locander and John Burton, 'The Effect of Question Form on Gathering Income Data by Telephone' (1976) 13 Journal of Marketing Research 189.

[18] 'The judges' identification of clarity of explanation as the single most important factor in the oral delivery of evidence highlighted the importance of expert witnesses developing communication skills of a high order if they are to supply to decision-makers information with which they are able to deal adequately': AJP, above n 2, 116 (emphasis added).

[19] Ibid 143.

[20] Ibid 146.

[21] Ibid 13, 40.

[22] Research suggests that respondents are more likely to endorse answers incorporated with the survey than supply their own. See William Belson and Judith Duncan, 'A Comparison of the Check-list and the Open Response Questioning Systems' (1962) 11 Applied Statistics 120; Howard Schuman and Stanley Presser, Questions and Answers in Attitude Surveys: Experiments on Question Form, Wording and Context (1981); Peter Hiller, 'The Subjective Dimensions of Social Stratification: The Case of the Self-Identification Question' (1973) 9(2) Australian and New Zealand Journal of Sociology 14; Aravind Joshi, 'Varieties of Cooperative Responses in Question–Answer Systems' in Ferenc Kiefer (ed), Questions and Answers (1983) 229.

[23] AJP, above n 2, 154.

[24] A sizeable portion did, however, identify 'bias' as some kind of problem. This raises several issues, including whether this is an artifact of the survey, whether it can be explained in other ways, and (depending on how we define bias) whether it represents an incorrigible feature of (the representation of) expertise.

[25] AJP, above n 2, 26. Presented as non-controversial or self-evident, the categories 'bias', 'partisanship' and 'lack of objectivity' appear repeatedly throughout Perspectives. The failure to clearly define bias (contrast AJP, above n 2, 24 n 3) and the extent to which biased expertise can be identified or remedied are serious oversights. On definitional problems consider Renate Mayntz, Kurt Holm and Roger Huebner, Introduction to Empirical Sociology (A Hammond, H Davis and D Shapiro trans, 1976 ed) 7–21 [trans of: Einführung in die Methoden der empirischen Soziologie]; Patrick McNeill, Research Methods (2nd ed, 1990) 23–5.

[26] AJP, above n 2, 144.

[27] Paul Lazarsfeld, 'Evidence and Inference in Social Research' (1958) 87(4) Daedalus 99. For a more general discussion of some of the implications of classification, consider Geoffrey Bowker and Susan Star, Sorting Things Out: Classification and Its Consequences (1999).

[28] AJP, above n 2, 36–8, 60. However, we should not assume that these categories are independent.

[29] Ibid 3, 25–6. Intriguingly, more respondents reported encountering bias (Question 2.2, 92%: at 143–4) than partisanship (Question 6.8, 85%: at 154).

[30] There are several comments about costs but no questions in the survey were directed to issues of cost. There are questions about taking experts out of play, but this is a very small part of the larger question about expert availability. See Carol Jones, Expert Witnesses: Science, Medicine and the Practice of Law (1994) 128–64.

[31] AJP, above n 2, 3. Numerous writers have commented upon how survey respondents do not necessarily agree on the meaning of commonplace terms. See William Belson, The Design and Understanding of Survey Questions (1981); Robert Nuckols, 'A Note on Pre-testing Public Opinion Questions' (1953) 37 Journal of Applied Psychology 119; above n 25.

[32] One possible explanation is that 'bias' is perceived as a problem for jurors. However we are provided with no reliable information about the ability of juries (or judges) to identify or assess the significance of 'bias'.

[33] Some social scientists might contend that questions pertaining to 'bias' should follow questions about judicial use of the rules of evidence. Adopting such an approach, questions about actual practice and the use of rules of evidence ought to have preceded and 'filtered' reflection on apparent problems. It is widely accepted that survey responses are shaped by question order effects. Consider the discussion of filtering in William Foddy, Constructing Questions for Interviews and Questionnaires: Theory and Practice in Social Research (1993) 101–11.

[34] AJP, above n 2, 155–6.

[35] Evidence Act 1995 (Cth); Evidence Act 1995 (NSW); Evidence Act 2001 (Tas).

[36] AJP, above n 2, 116.

[37] Ibid 88–9. This rule was substantially abolished in the Uniform Evidence Acts s 80.

[38] AJP, above n 2, 159.

[39] Joe Cecil and Thomas Willging, Court-Appointed Experts: Defining the Role of Experts Appointed under Federal Rule of Evidence 706 (1993); Laural Hooper, Joe Cecil and Thomas Willging, 'Assessing Causation in Breast Implant Litigation: The Role of Science Panels' (2001) 64 Law and Contemporary Problems 139; Michael Saks, 'The Phantom of the Courthouse' (1995) 35 Jurimetrics Journal 233.

[40] Frank Turner, Contesting Cultural Authority: Essays in Victorian Intellectual Life (1993) 201; Andrew Abbott, The System of Professions: An Essay on the Division of Expert Labor (1988).

[41] Stanley Cohen, Visions of Social Control: Crime, Punishment and Classification (1985); Joseph Gusfield, The Culture of Public Problems: Drinking-Driving and the Symbolic Order (1981).

[42] Gary Edmond, 'The Law-Set: The Legal–Scientific Production of Medical Propriety' (2001) 26 Science, Technology and Human Values 191.

[43] This shift of agency operates as a kind of supererogation, where experts are burdened with (some of) the 'sins' of the legal system. Consider Gary Edmond, 'Constructing Miscarriages of Justice: Misunderstanding Scientific Evidence in High Profile Criminal Appeals' (2002) 22 Oxford Journal of Legal Studies 53.

[44] Alfred Schutz, 'Phenomenology and the Social Sciences' in Collected Papers (1962) vol 1, 118–39.

[45] 'Gaps' between public accounting (what people say they do or believe) and actual behaviours (what they actually do or believe) are notorious in social scientific research. McNeill makes the point that:

The interview schedule or questionnaire means that the researcher is setting limits to what the respondents can say. … Fundamentally, the survey [or questionnaire] method finds out what people will say when they are being interviewed, or filling in a questionnaire. This may not be the same thing as what they actually think or do.

McNeill, above n 25, 47; Irwin Deutscher, What We Say/What We Do: Sentiments and Acts (1973).

[46] AJP, above n 2, 16.

[47] Compare Jonathan Potter, Representing Reality: Discourse, Rhetoric and Social Construction (1996).

[48] Contrast the situatedness of all knowledge developed by Donna Haraway, Simians, Cyborgs and Women: The Reinvention of Nature (1991).

[49] AJP, above n 2, 1 (emphasis added).

[50] The Report indicates that this survey represents the initial stages of a more comprehensive enterprise. Without wanting to imply that a more expansive and diversified inquiry would repair the methodological limitations, the initial inquiry discloses an exclusive interest in the attitudes of judges and magistrates.

[51] Roland Barthes, Mythologies (Annette Lavers trans, 1972 ed) [trans of: Mythologies].

[52] Questions about whether judges should dominate law reform, or supply the 'empirical' evidence for law reform, are fundamentally political.

[53] AJP, above n 2, 1.

[54] Ibid 1–2.

[55] Ibid 2.

[56] Ibid 13.

[57] Schuman and Presser, above n 22; Tom Smith, 'Non-Attitudes: A Review and Evaluation' in Charles Turner and Elizabeth Martin (eds), Surveying Subjective Phenomena (1984) vol 2, 215.

[58] Ian Freckelton, 'Judicial Attitudes Toward Scientific Evidence: The Antipodean Experience' (1997) 30 University of California Davis Law Review 1137, 1212.

[59] AJP, above n 2, 65 (emphasis added).

[60] Ibid 43.

[61] Ibid 66. This makes a discussion, which criticises mock jury research, especially ironic. The authors note that investigations using mock jurors 'are open to criticism in terms of their methodology, their selection procedures, their cultural matrix, and the fact that most used mock, rather than real, jurors': at 45. These other inquiries, however flawed, did at least target jurors and mock jurors. Furthermore, the authors make few attempts to explain (in)consistencies between judicial perspectives and the claims promoted in other parts of the Report. For example, while answers provided by judges do not seem to support the abolition of the lay jury, elsewhere in the Report (at 38, 43–5) more critical approaches to jury (and implicitly judicial) competence are espoused and apparently endorsed.

[62] Ibid 66. On the power of quantification, see Theodore Porter, Trust in Numbers: The Pursuit of Objectivity in Science and Public Life (1995); Walter Williams, Honest Numbers and Democracy: Social Policy Analysis in the White House, Congress and the Federal Agencies (1998).

[63] See Harry Kalven and Hans Zeisel, The American Jury (1966).

[64] AJP, above n 2, 150.

[65] Other models of expertise, understanding and competence might be used to nuance, and even defend, the jury (and judges). Consider, for example, recent research into the public understanding of science (PUS) by Alan Irwin and Brian Wynne (eds), Misunderstanding Science: The Public Reconstruction of Science and Technology (1996); Alan Irwin, Citizen Science: A Study of People, Expertise and Sustainable Development (1995); Harry Collins and Robert Evans, 'The Third Wave of Science Studies: Studies of Expertise and Experience' (2002) 32 Social Studies of Science 235; Alan Irwin, 'Expertise and Experience in the Governance of Science: What is Public Participation for?' in Gary Edmond (ed), Expertise in Regulation and Law (2004) 32 and more general approaches to lay understanding such as Nicholas Abercrombie and Brian Longhurst, Audiences: A Sociological Theory of Performance and Imagination (1998); James Scott, Domination and the Arts of Resistance: Hidden Transcripts (1990).

[66] The sociologist Collins has argued that sometimes when things which are intended as demonstrations — deliberately staged to restrict the extent of interpretative flexibility — 'go wrong' they revert to the status of experiment — where the level of ambiguity is much greater. If we problematise some of the assumptions underlying the survey — such as the sufficiency of judicial perspectives or appropriateness of answering some of the questions — then this demonstration of the real, via judicial attitudes, can be understood as an experiment into judicial aptitudes and/or professional interests. See Harry Collins, 'Public Experiments and Displays of Virtuosity: The Core-Set Revisited' (1988) 18 Social Studies of Science 725.

[67] Jean Converse, 'Predicting No Opinion in the Polls' (1976) 40 Public Opinion Quarterly 515; Howard Schuman and Stanley Presser, 'Public Opinion and Public Ignorance: The Fine Line Between Attitudes and Non-Attitudes' (1980) 85 American Journal of Sociology 1214; George Bishop et al, 'Pseudo-Opinions on Public Affairs' (1980) 44 Public Opinion Quarterly 198; Clyde Coombs and Lolagene Coombs, '"Don't Know": Item Ambiguity or Respondent Uncertainty' (1976) 40 Public Opinion Quarterly 497; Jon Krosnick et al, 'The Impact of "No Opinion" Response Options on Data Quality: Non-Attitude Reduction or an Invitation to Satisfice?' (2002) 66 Public Opinion Quarterly 371.

[68] AJP, above n 2, 65.

[69] Ibid 2 (emphasis added).

[70] Ibid 111.

[71] Ibid 144. See also at 40.

[72] Ibid 145.

[73] Ibid 150.

[74] Ibid 151.

[75] Ibid 40 (emphasis added).

[76] Ibid 6–7. See Harrington-Smith on behalf of the Wongatha People v Western Australia (No 2) ('Wongatha' Judgment No 7) [2003] FCA 893; (2003) 130 FCR 424, 427–8 [19]; Jango v Northern Territory (No 2) [2004] FCA 1004 [9].

[77] Ibid 113.

[78] Ibid 145.

[79] Ibid 145–6.

[80] Ibid 55 (emphasis added).

[81] Ibid 116. Several sociologists characterise representations of complexity and uncertainty as symbolic forms of action: Brian Campbell, 'Uncertainty as Symbolic Action in Disputes Among Experts' (1985) 15 Social Studies of Science 429; Susan Star, 'Scientific Work and Uncertainty' (1985) 15 Social Studies of Science 391.

[82] Compare Jerome Ravetz, Scientific Knowledge and Its Social Problems (1971).

[83] Clifford Geertz, 'Thick Description: Toward an Interpretative Theory of Culture' in The Interpretation of Cultures: Selected Essays (1973) 3.

[84] In the influential Access to Justice report, Lord Woolf appeared to recognise the continuing need for 'red-blooded adversarialism', albeit in fewer and fewer cases: Lord Woolf, Access to Justice (1996) 138–9.

[85] Gary Edmond, 'Judicial Representations of Scientific Evidence' (2000) 63 Modern Law Review 216.

[86] AJP, above n 2, 2. See also at 7, 12.

[87] Ibid 10 (emphasis added).

[88] Ibid 13 (emphasis added). See also at 70.

[89] Barry Barnes, Understanding Agency: Social Theory and Responsible Action (2000).

[90] AJP, above n 2, 57.

[91] Ibid 10–12, 113–15.

[92] Ibid 10, 113.

[93] Ibid 113.

[94] Ibid 10, 113.

[95] Lord Woolf, above n 84.

[96] AJP, above n 2, 57.

[97] Ibid 149.

[98] Ibid. Even combined (at 37%), these responses were fewer than the 'Missing responses' (38%).

[99] Ibid 5–6.

[100] Ibid 149.

[101] Ibid 61.

[102] Ibid 12.

[103] Ibid 149–50 (Question 4.5).

[104] Ibid 63. See also at 12.

[105] Ibid 116–17.

[106] This strategy was extended to expert evidence (not explicitly intended for litigation) by Von Doussa J in the Hindmarsh Island Bridge litigation: Chapman v Luminis Pty Ltd [2001] FCA 1106; (2001) 123 FCR 62. See Gary Edmond, 'Thick Decisions: Expertise, Advocacy and Reasonableness in the Federal Court of Australia' (2004) 74 Oceania 190.

[107] Steven Connor, Dumbstruck: A Cultural History of Ventriloquism (2000).

[108] Historical work on the role of experts in court demonstrates their tremendous value as well as the negotiations around formal legal recognition and definitions of reliability and competence. Historical accounts frequently suggest close and continuing relations between courts and fields of expertise, particularly psychology, medicine and the forensic sciences: see Jones, above n 30; Tal Golan and Snait Gissis (eds), 'Science and Law' (1999) 12 Science in Context 3; Jennifer Mnookin, 'The Image of Truth: Photographic Evidence and the Power of Analogy' (1998) 10 Yale Journal of Law and the Humanities 1; Simon Cole, Suspect Identities: A History of Fingerprinting and Criminal Identification (2001); Roger Smith and Brian Wynne (eds), Expert Evidence: Interpreting Science in the Law (1989); Tal Golan, Laws of Men and Laws of Nature: The History of Scientific Expert Testimony in England and America (2004).

[109] AJP, above n 2, 117 (emphasis added).

[110] Ibid 118 (emphasis added).

[111] Ibid.

[112] Ibid 149.

[113] Ibid 160.

[114] John Sink, 'The Unused Power of a Federal Judge to Call His Own Expert Witness' (1956) 29 Southern California Law Review 195; Tahirih Lee, 'Court-Appointed Experts and Judicial Reluctance: A Proposal to Amend Rule 706 of the Federal Rules of Evidence' (1988) 6 Yale Law and Policy Review 480.

[115] AJP, above n 2, 17.

[116] Ibid 104.

[117] Ibid 156–7.

[118] Ibid 113.

[119] Ibid 118.

[120] Consider also Freckelton, above n 58, 1212:

The project is timely because of the assertions advanced in some quarters that Australia's commitment to the adversary process needs to be revisited, in part, on the basis that the community cannot underwrite its cost and on the basis that juries can no longer cope with the complexities and conflicting nature of modern-day expert evidence.

[121] Some of these issues have been canvassed more extensively in Gary Edmond, 'After Objectivity: Expert Evidence and Procedural Reform' [2003] SydLawRw 8; (2003) 25 Sydney Law Review 131 and Gary Edmond and David Mercer, 'Experts and Expertise in Legal and Regulatory Settings' in Gary Edmond (ed), Expertise in Regulation and Law (2004) 1.

[122] Interestingly, where judges comment on the partisanship of experts they may compromise their own independence. See Vakauta v Kelly [1989] HCA 44; (1989) 167 CLR 568.

[123] These are not new observations. In Lord Abinger v Ashton (1873) 17 LR Eq 358, 374 Sir George Jessel MR wrote

'in matters of opinion I very much distrust expert evidence, for several reasons. In the first place, although the evidence is given upon oath, in point of fact the person knows he cannot be indicted for perjury, because it is only evidence as to a matter of opinion. So that you have not the authority of legal sanction.'

[124] Carl Cranor and David Eastmond, 'Scientific Ignorance and Reliable Patterns of Evidence in Toxic Tort Causation: Is There a Need for Liability Reform?' (2001) 64 Law and Contemporary Problems 5.

[125] AJP, above n 2, 23–9; Learned Hand, 'Historical and Practical Considerations Regarding Expert Testimony' (1901) 15 Harvard Law Review 40; Jack Weinstein, 'Improving Expert Testimony' (1986) 20 University of Richmond Law Review 473, 482; Margaret Hagen, Whores of the Court: The Fraud of Psychiatric Testimony and the Rape of American Justice (1997).

[126] While not defined, the authors repeatedly insinuate an image of 'reliability', similar to that promoted in a seminal US Supreme Court decision (Daubert v Merrell Dow Pharmaceuticals, Inc, [1993] USSC 99; 509 US 579 (1993) ('Daubert')), notwithstanding apparent wholesale rejection by the surveyed judges (AJP, above n 2, 8, 75–79, 91, 93, 101, 115). More than 50 per cent of judges did not consider 'reliability' to be a prerequisite for the admission of expert evidence (Question 6.2) and fewer than 20 per cent considered falsification necessary for determining reliability (Question 6.3). See also Ian Freckelton and Hugh Selby, The Law of Expert Evidence (1999) 84, 267, 338–9; Ian Freckelton, 'The Challenge of Junk Psychiatry, Psychology and Science: The Evolving Role of the Forensic Expert' in Hugh Selby (ed), Tomorrow's Law (1995) 52; Ian Freckelton, 'Contemporary Comment: When Plight Makes Right—The Forensic Abuse Syndrome' (1994) 18 Criminal Law Journal 29. Compare Gary Edmond and David Mercer, 'Keeping "Junk" History, Philosophy and Sociology of Science Out of the Courtroom: Problems with the Reception of Daubert v Merrell Dow Pharmaceuticals Inc.' [1997] UNSWLawJl 13; (1997) 20 University of New South Wales Law Journal 48.

[127] AJP, above n 2, 115. Two influential and politically conservative US commentators also advocate the use of 'mainstream science': Kenneth Foster and Peter Huber, Judging Science: Scientific Knowledge and the Federal Courts (1997). Huber is a senior fellow of the Manhattan Institute, a politically conservative pro-business think tank. Compare Gary Edmond and David Mercer, 'Juggling Science: From Polemic to Pastiche' (1999) 13 Social Epistemology 215.

[128] Gary Edmond and David Mercer, 'Daubert and the Exclusionary Ethos: The Convergence of Corporate and Judicial Attitudes Towards the Admissibility of Expert Evidence in Tort Litigation' (2004) 26 Law and Policy 231.

[129] AJP, above n 2, 115.

[130] While interests are a useful means of interpreting social action they are analytical constructs and do not necessarily provide a direct correlation with actual motivations or the reasons for action. Some of the limits with interest explanations are canvassed by Barry Hindess, '"Interests" in Political Analysis' in John Law (ed), Power, Action and Belief: A New Sociology of Knowledge? (1986) 112; Steve Woolgar, 'Interests and Explanation in the Social Study of Science' (1981) 11 Social Studies of Science 365; Steven Yearley, 'The Relationship Between Epistemological and Sociological Cognitive Interests: Some Ambiguities Underlying the Use of Interest Theory in the Study of Scientific Knowledge' (1982) 13 Studies in the History and Philosophy of Science 353.

[131] Compare Gary Edmond and David Mercer, 'Trashing "Junk" Science' (1998) Stanford Technology Law Review 3 <http://stlr.stanford.edu/STLR/Articles/98_STLR_3/index.htm> at 24 February 2005.

[132] AJP, above n 2, 59.

[133] Kumho Tire Co v Carmichael, [1999] USSC 19; 526 US 137 (1999); Daubert v Merrell Dow Pharmaceuticals, Inc, [1995] USCA9 8; 43 F 3d 1311 (1995). The second case was the decision of the Ninth Circuit Court of Appeals on remand. Citing Peter Huber, Galileo's Revenge: Junk Science in the Courtroom (1991), the Court of Appeals adopted a reading of Daubert [1993] USSC 99; 509 US 579 (1993) which made it significantly more onerous for plaintiffs to adduce expert evidence.

[134] Consider the activities of experts in David Mercer, 'Hyper-Experts and the Vertical Integration of Expertise in EMF/RF Litigation' in Gary Edmond (ed), Expertise in Regulation and Law (2004) 85.

[135] One question from the survey provides some indication of the practical utility of protocols (like the Declaration) and duty statements in situ. The answers to Question 8.4, directed to certification of expert reports under s 177 of the Uniform Evidence Acts, would appear to suggest that such reforms are perceived as irrelevant or largely ineffective. These implications remained undeveloped in the Report.

[136] See Michael Lynch, 'The Discursive Production of Uncertainty: The OJ Simpson "Dream Team" and the Sociology of Knowledge Machine' (1998) 28 Social Studies of Science 829; Edmond, above n 42; Stefan Timmermans and Marc Berg, The Gold Standard: The Challenge of Evidence-Based Medicine and Standardization in Health Care (2003) 94–104.

[137] AJP, above n 2, 1 (emphasis added).

[138] This approach can be characterised as naïve empiricism: a general perspective dismissed as implausible in virtually all texts of the social sciences, including introductory textbooks on methods. Consider the widely cited introductory work by Alan Chalmers on the limits of inductivism, What is This Thing Called Science? (2nd ed, 1982) 1–21. As McNeill, above n 25, 128, explains:

"Data" means, literally, "things that are given", i.e. there, waiting to be found. It assumes a positivist view of the world. But if knowledge is created and constructed, then data is not "given", but produced … Every research method is a means of producing knowledge, not collecting it. None simply records "the facts" or "the truth" as an external object.

[139] Mayntz et al, above n 25, 23. According to these authors, who were writing in the 1960s,

'[i]t is now virtually undisputed that research without some theoretical basis is not only unfruitful but downright impossible.' They continue: 'If radical empiricists have questioned the dependence of research on theory it has been because they had too narrow a concept of theory and did not recognise that, in the social sciences, delineating the subject and attaching names to social realities are themselves theoretical decisions.'

[140] Another way of assessing the choice of methods is to compare Australian Judicial Perspectives with the discussion of surveys outlined in Freckelton and Selby's leading textbook on expert evidence The Law of Expert Evidence (1999) 110–48, especially 142ff. An examination of the principles and methods from The Law of Expert Evidence might be used to suggest that Perspectives does not conform to the principles required for legally admissible survey work. Perspectives would appear, for example, to encounter difficulties with the following: '(1) A representative cross-section … must be interviewed … (4) Leading questions must not be posed. … (12) Interviewers should be experienced … (13) Interviewers should not depart from prescribed procedures': at 143.

[141] The value of including other subjects is that they provide a form of triangulation which may or may not support the authors' one dimensional investigation (of judges and magistrates). If the authors had considered experts, for example, they may well have produced a range of results which were not consistent, or easily reconciled, with the answers provided by judges and magistrates. Such results might have destabilised the findings and required further investigation or more sophisticated interpretive techniques.

[142] For a critical overview of several research methods see Emile Durkheim, Suicide: A Study in Sociology (1897); Aaron Cicourel, Method and Measurement in Sociology (1964) 7–120; Ray Pawson, A Measure for Measures: A Manifesto for Empirical Sociology (1989); Charles Briggs, Learning How to Ask: A Sociolinguistic Appraisal of the Interview in Social Science Research (1986). More specifically, consider Colin Robson, Real World Research: A Resource for Social Scientists and Practitioner-Researchers (2nd ed, 2002) 125–7, 243. This is not intended to suggest that other forms of inquiry are without difficulties but to reinforce limitations inherent in questionnaires, see also Martyn Hammersley, What's Wrong with Ethnography: Methodological Explorations (1992) 96–122, 159–73; Bent Flyvbjerg, Making Social Science Matter: Why Social Inquiry Fails and How it Can Succeed Again (Steven Sampson trans, 2001).

[143] John Campbell, Richard Daft and Charles Hulin, What to Study: Generating and Developing Research Questions (1982) 97–103. Campbell et al contend that without appropriate theory research may be easier and faster but the results may have little value: at 102.

[144] Several research traditions, including phenomenology and ethnomethodology — inspired by Garfinkel and Sacks — focusing on contextual inter-subjective meaning are exceptionally critical of survey research. See Harold Garfinkel, Studies in Ethnomethodology (1967, reprinted 1996); Harvey Sacks, Lectures on Conversation (1998); Cicourel, above n 142.

[145] For an overview of recent sociological writings in the field of law and science, see David Mercer, 'The Intersection of Sociology of Scientific Knowledge (SSK) and Law: Some Themes and Policy Reflections' (2002) 6 Law Text Culture 137; above n 65.

[146] Harry Collins, Changing Order: Replication and Induction in Scientific Practice (1985); Michael Mulkay and Nigel Gilbert, 'Putting Philosophy to Work: Karl Popper's Influence on Scientific Practice' (1981) 11 Philosophy of the Social Sciences 389.

[147] Accessible examples of scientific controversy studies include Harry Collins and Trevor Pinch, The Golem: What Everyone Should Know About Science (1993) and The Golem At Large: What You Should Know About Technology (1998).

[148] Thomas Kuhn, The Structure of Scientific Revolutions (2nd ed, 1970) 144–59; Harry Collins, Gravity's Shadow: The Search for Gravitational Waves (2004).

[149] Randal Albury, The Politics of Objectivity (1983); Robert Proctor, Value-Free Science? Purity and Power in Modern Knowledge (1991); Bruno Latour, Science in Action: How to Follow Scientists and Engineers Through Society (1987); Helen Longino, Science as Social Knowledge: Values and Objectivity in Scientific Inquiry (1990).

[150] Prominent citations include Australian Law Reform Commission, Review of the Federal Civil Justice System, Discussion Paper No 62 (1999) 1.8, 13.21, 13.22; Australian Law Reform Commission, Managing Justice: A Review of the Federal Civil Justice System, Report No 89 (1999) 6.76, 6.92, 6.95, 6.111, 6.113, 8.161; New South Wales Law Reform Commission, Expert Witnesses, Issues Paper No 25 (2004); Justice Alan Abadee, 'Professional Negligence Litigation: A New Order in Civil Litigation — the Role of Experts in a New Legal World and in a New Millennium' (Paper presented at the Australian College of Legal Medicine, Canberra, 16 October 1999); Justice Alan Abadee, 'The Expert Witness in the New Millennium' (Paper presented at the 2nd Annual Scientific Meeting of General Surgeons Australia, Sydney, 2 September 2000); Justice Alan Abadee, 'Update on the Professional Negligence List and Expert Evidence: Changes for the Future' (Paper presented at the Australian Plaintiff Lawyers Association Branch Conference, Sydney, 3 March 2000); Justice Harold Sperling, 'Expert Evidence: The Problem of Bias and Other Things' (2000) 4 Judicial Review 429; Justice James Wood, 'Expert Witnesses — The New Era' (Paper presented at the 8th Greek Australian International Legal and Medical Conference, Corfu, June 2001); Re W and W (Abuse Allegations; Expert Evidence) [2001] FamCA 216; (2001) 164 FLR 18.

[151] Smith and Wynne, above n 108; Sheila Jasanoff, Science at the Bar: Law, Science and Technology in America (1995) 215–17.

[152] Gary Edmond, 'Capturing the Courts: Public Science and the Subversion of Legal Autonomy' (unpublished).

[153] Compare Thomas Gieryn, Cultural Boundaries of Science: Credibility on the Line (1999).

[154] In recent decades, judges have adopted more interventionist, even managerial, roles and procedures. In some jurisdictions, such as the US, cases are frequently disposed of through pre-trial judicial decision making about the admissibility of expertise. Judicial definitions and interpretations of science, expertise, and models of causation can determine not only the outcome of particular cases but the shape of entire congregations (of cases), and extend to influence professional practice, research orientations and even the publication of results. See Gary Edmond and David Mercer, 'Litigation Life: Law–Science Knowledge Construction in (Bendectin) Mass Toxic Tort Litigation' (2000) 30 Social Studies of Science 265; Edmond and Mercer, above n 131.

[155] Philip Mirowski and Esther-Mirjam Sent (eds), Science Bought and Sold: Essays in the Economics of Science (2002).

[156] Lorraine Daston and Peter Galison, 'The Image of Objectivity' (1992) 40 Representations 81; Steven Shapin, 'Cordelia's Love: Credibility and the Social Studies of Science' (1995) 3 Perspectives on Science 255; Steven Shapin, A Social History of Truth: Civility and Science in Seventeenth-Century England (1994).

[157] John Schuster and Richard Yeo (eds), The Politics and Rhetoric of Scientific Method: Historical Studies (1986); Nigel Gilbert and Michael Mulkay, Opening Pandora's Box: A Sociological Analysis of Scientists' Discourse (1984); Timothy Lenoir, Instituting Science: The Cultural Production of Scientific Disciplines (1997); Turner, above n 40.

[158] Huber, above n 133. For a critical discussion of Huber's work consider Edmond and Mercer, above n 131.

[159] The adoption of more realistic models of science and expertise raises important questions for those studying and practising politics, regulation and law. Consider Yaron Ezrahi, The Descent of Icarus: Science and the Transformation of Contemporary Democracy (1990); Richard Sclove, Democracy and Technology (1995); Stephen Hilgartner, Science on Stage: Expert Advice as Public Drama (2000).

[160] Unfortunately, such criticisms often appear to support the status quo, see William Haltom and Michael McCann, Distorting the Law: Politics, Media and the Litigation Crisis (2004) 76, 106, 289–90; Michael Saks, 'Public Opinion About the Civil Jury: Can Reality Be Found in the Illusions?' (1998) 48 DePaul Law Review 221, 234–5.

[161] More attention ought to be devoted to the roles played by experts in the very large number of disputes that do not go to trial, see Richard Miller and Austin Sarat, 'Grievances, Claims, and Disputes: Assessing the Adversary Culture' (1980–81) 15 Law and Society Review 525.
