Journal of Law, Information and Science (JLIS)

Clarke, Roger; Maurushat, Alana --- "Passing the Buck: Who Will Bear the Financial Transaction Losses from Consumer Device Insecurity?" [2007] JlLawInfoSci 2; (2007) 18 Journal of Law, Information and Science 8

Passing the Buck: Who Will Bear the Financial Transaction Losses from Consumer Device Insecurity?

Roger Clarke and Alana Maurushat[*]

Abstract

Internet-connected devices offer consumers the convenience and flexibility to perform tasks online, ranging from shopping to streaming video to banking. Such activities are increasingly an integral part of many people’s lives. Consumers rely on connected devices, in particular personal computers and mobile phones, to transact online. Unfortunately, there has been a surge in unauthorised banking transactions, driven in part by the proliferation of computer malware (malicious software) that makes online transactions less secure. Many of these transactions are financially risky, particularly those that involve payment. Many jurisdictions, including Australia and New Zealand, are amending their banking codes to provide a new allocation of liability for unauthorised online transactions, in particular where computer devices are used in a transaction. The new liability regimes shift liability from the bank to the consumer where computer devices are insufficiently secure. The financial institutions’ argument is predicated on the assumption that consumers are capable of taking responsibility for the security of the devices that they use. The nature of consumer devices is such that it is entirely infeasible to impose responsibility on consumers in the manner that banks desire. Indeed, many eCommerce and even eBanking services only work because they exploit vulnerabilities on consumer devices. This paper surveys the security threats and vulnerabilities of consumer devices. It assesses the effectiveness of available technical safeguards and the practicability of imposing responsibilities on consumers to understand the risks involved, to install relevant software, to configure it appropriately, and to manage it on an ongoing basis. It then explores a subset of legal safeguards, examining the inadequacies of Australian law and the legal system in protecting consumers who bank online with Internet-connected devices.
The authors argue that there should not be a shift in the allocation of liability for unauthorised banking transactions. Emphasis should instead be placed on more practical approaches to the problem.

1. Introduction

Australia once boasted one of the most enlightened consumer protection regimes in the world. At the federal level, the last decade has seen a substantial winding back of protections, with the regulation of corporate behaviour in relation to consumers to a considerable extent replaced by the largely vacuous notion of ‘self-regulation’.

In the payments area, however, the Electronic Funds Transfer Code of Conduct (EFT Code) has for many years provided crucial protections to consumers in the area of online purchasing. It was established in 1986 to address issues about ATM usage. It was expanded in the 1990s to apply to payments by means of cards at EFT/POS terminals at the physical point of sale in stores, and later to payments arranged remotely by means of card-details keyed into web-forms. Although nominally ‘a voluntary industry code of practice’, in practice financial institutions have little option but to abide by it, and it is most readily described as a form of ‘co-regulation’ rather than ‘self-regulation’. The Code applies only to regulated financial institutions, however, and the increasingly rich array of alternative payment mechanisms (such as eBay’s PayPal) is not subject to such codes.

The current version of the EFT Code, dated 18 March 2002, is presently under review. The Australian Securities and Investments Commission (ASIC) released a consultation paper on 12 January 2007, seeking responses by the inauspicious date of Friday 13 April 2007. ASIC received a number of submissions strongly opposed to changing the allocation of liability for security breaches arising from consumer devices. ASIC is now in the process of re-drafting the EFT Code. It is unknown whether the re-drafted version will contain similar security clauses.

Payment transactions are increasingly being conducted on the Internet. Moreover, they are being undertaken using a wide range of consumer devices. Consumers presume that the present protections translate into the new contexts. As the EFT Code currently stands, its scope is defined by the expression 'electronic equipment', which would appear to make it automatically extensible to new forms of consumer devices. The ASIC discussion paper does not suggest any reduction in this aspect of the Code’s scope.

Many financial associations are looking to modify the existing allocation of liability between financial institutions and their customers in light of the surge in unauthorised electronic banking transactions. The New Zealand Code of Banking Practice was modified in 2007, establishing a new liability regime for unauthorised banking transactions. New Zealand financial institutions will now only be liable for direct loss due to a breach of security in their internal Internet Banking system resulting from a failure to take reasonable care. The financial institutions will no longer be liable for losses incurred through security breaches in consumer devices.

The Australian EFT Code is currently under review. As part of that process, corporations are seeking to significantly reduce the consumer protections that the Code currently affords. In particular, corporations want to shift liability for financial loss from the corporation to the consumer where devices are insufficiently secure. They also want to make consumers liable for losses caused by consumer devices infected with malware. An example of how such losses can arise is where ‘malicious software’ running in a PC that a consumer uses for a financial transaction captures the user's password and/or PIN and thereby enables an identity fraud to be performed. Their viewpoint has found form in Q28 of the ASIC discussion paper:

Should account holders be exposed to any additional liability under cl 5 for unauthorised transaction losses resulting from malicious software attacks on their electronic equipment if their equipment does not meet minimum security requirements?

The document is highly unclear as to what is meant by ‘does not meet minimum security requirements’. In addition to the term ‘minimum’, the document also uses the terms ‘adequate’ security and ‘reasonable’ safeguards to secure the device. Moreover, there is little or no discussion of how such terms would be operationalised, or of their concrete implications for liability.

This paper examines the scope for consumers to ensure the security of transactions they conduct using consumer devices, particularly those involving payments. It is addressed to executives and policy-makers who have some understanding of the infrastructure and technologies involved, rather than to technical specialists. Technical language is used only to the extent necessary to achieve sufficient accuracy, and a brief explanation is provided for each technical term the first time that it is used.

The paper commences by identifying some of the key characteristics of consumer devices, and describing the approach adopted to the analysis. It then catalogues the threats and vulnerabilities that afflict consumer devices, and the safeguards that can be put into place. The effectiveness and practicality of both the technical and legal safeguards are examined, and policy issues are identified. Alternative approaches are canvassed. The conclusion is reached that it would be inappropriate and counter-productive to impose liability for malfunctions on consumers, but that there would be considerable value in education and software being made available to consumers, and even greater value in imposing responsibility for security on the suppliers of consumer devices.

2. Consumer Devices

The term ‘consumer’ is used broadly in this paper. The first category of person it is intended to apply to is individuals operating in a private capacity, whether for social, economic or other purposes. It therefore extends not only to people at home and at play, but also to operators of unincorporated small businesses, and to employees who use their own device in work-contexts. The second category is individuals who use a device that is provided by the employer but who take personal responsibility for their actions using it. It is in general not intended to encompass use by employees of, or agents for, incorporated medium and large business enterprises or government agencies, who use an employer-managed device.

In order to convey the breadth of applicability, the term ‘authorised user’ is applied to the individual who owns the device or to whom it is assigned, and who takes responsibility for transactions performed using it.

The term ‘consumer device’ is also used broadly, to encompass all devices used by consumers that contain a processor, operating system and applications, which together provide users with the capacity to participate in transactions with adjacent and remote devices. In 2007, this includes personal computers, both desktops and portables, and a wide variety of ‘handhelds’. Small consumer devices are numerous and diverse, and include mobile phones, personal digital assistants (PDAs) of various kinds, games machines, music-players like the iPod, and ‘converged’/multi-function devices such as the recently-announced Apple iPhone. It is also feasible for processing capabilities to be housed in many other, much smaller packages, such as credit-cards, rings, watches, and RFID tags. In any device, a great number of applications may be installed. This becomes important from legal and evidence perspectives, because a technical vulnerability may be derived from a physical feature in the device, at the point of connection to the device, from the use of particular software, or, all too commonly, from combinations of several such factors.

Most consumer devices are currently conceived as ‘single-user’ devices. By this is meant that only one user at a time can cause the device to perform functions. Multi-user devices, such as those that support web-servers and mail-servers, make functions available to more than one remote device at a time. Such multi-user devices are subject to vulnerabilities additional to those identified in this paper. This paper gives no further consideration to multi-user devices, but instead focuses on single-user consumer devices.

Consumer devices may be used entirely standalone, without any form of communication with any other device. Because standalone use does not support the conduct of transactions with other devices and the individuals and organisations that control them, it is not further considered in this paper.

Consumer devices are often enhanced through the installation of additional components, software and/or data. Examples of such enhancements include multiple screens, audio-speakers, extra storage (variously magnetic, optical and solid-state), PCMCIA/PC cards and ExpressCards, and other attachments inserted into USB and FireWire sockets.

Crucially for the analysis conducted in this paper, consumer devices may also interact with other devices in several ways:

• mediated by the physical transport of storage devices such as magnetic disks, CDs, DVDs, and solid-state tools such as ‘thumb drives’ or ‘memory sticks’;

• using ‘wired’ communications, such as dial-up via modem, alternative uses of telephone lines (such as ADSL) and other kinds of cable (e.g. Ethernet and cable-TV services); and

• using ‘unwired’ communications, such as satellite, cellular mobile services designed for telephones (such as GSM/GPRS, CDMA and W-CDMA), wireless data services designed for computers (such as Wi-Fi, WiMAX and iBurst) and personal networks (such as Bluetooth).

In the layers above the communications infrastructure, the dominant facility is currently the Internet, and particularly the Web overlaid on it. The upper layers are in a state of flux, however. For example, 3G mobile telephony might be maturing into an alternative high-level application infrastructure. Any such emergent services are also within the scope of this analysis.

A consumer device that is connected to a telecommunications infrastructure can perform functions, ultimately in hardware but primarily driven by software. The elements that make up those functions include:

• the receipt of data from the human user, and from other devices both adjacent and remote;

• the processing of available data, in particular to:

o analyse it;

o convert it into some other form;

o ‘render’ it (e.g. display it on a screen, ‘play’ it as sound through a speaker, or print it on a printer);

o store it on some storage medium connected to the device; and

• the despatch of data to some other device.

Software may be caused to perform these operations by the action of a human being, typically using a physical or simulated keyboard and/or mouse, but perhaps through voice-activation. There are several other ways, however, including triggering by internal conditions within the device (such as the date or time), and initiation from a remote location by means of messages transmitted over telecommunications infrastructure.

The consumer devices described in this section are subject to many threats and vulnerabilities. The following section describes the research method adopted in this paper in order to evaluate them.

3. Research Method

The conventional computer security model is adopted in this paper. Under this model, threatening events impinge on vulnerabilities to cause harm. Safeguards are used to prevent or ameliorate that harm. More fully:

• a threat is a circumstance that could result in harm, and may be natural, accidental or intentional. A party responsible for an intentional threat is referred to as an attacker;

• a threatening event (e.g. a particular power outage or receipt of an email with an infected file attached to it) is an instance of a generic threat (power outages and email-borne viruses);

• harm is anything that has deleterious consequences, and includes injury to persons, damage to property, financial loss, loss of value of an asset, and loss of reputation and confidence. Harm arises because a threatening event impinges on a vulnerability;

• a vulnerability is a feature or weakness that gives rise to a susceptibility to a threat;

• a safeguard is a measure intended to avoid or reduce vulnerabilities. Safeguards may or may not be effective;

• safeguards may be subject to countermeasures;

• in response to countermeasures, safeguards may be adapted, or new ones instituted. An attack-safeguards-countermeasures cycle may arise, particularly if the rewards for a successful attacker are high.
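Purely as an illustration of how these terms fit together, the model can be sketched as a small data structure (all class and field names here are our own invention, not drawn from any standard):

```python
from dataclasses import dataclass, field

@dataclass
class Threat:
    """A circumstance that could result in harm (natural, accidental or intentional)."""
    description: str
    intentional: bool = False  # if True, the responsible party is an 'attacker'

@dataclass
class Vulnerability:
    """A feature or weakness that gives rise to a susceptibility to a threat."""
    description: str
    safeguards: list = field(default_factory=list)  # measures intended to reduce the vulnerability

def harm_possible(threat: Threat, vulnerability: Vulnerability) -> bool:
    # Harm arises when a threatening event impinges on a vulnerability
    # that no effective safeguard has removed. Safeguards may or may
    # not be effective, and may themselves be subject to countermeasures.
    return not any(s.get("effective") for s in vulnerability.safeguards)

phish = Threat("email-borne phishing message", intentional=True)
unpatched = Vulnerability("unpatched web-browser")
print(harm_possible(phish, unpatched))  # True: no effective safeguard in place
unpatched.safeguards.append({"name": "vendor patch applied", "effective": True})
print(harm_possible(phish, unpatched))  # False
```

The sketch captures only the core relationship: harm becomes possible when a threat meets a vulnerability that no effective safeguard has removed.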

This paper is concerned with the use of consumer devices to perform transactions that have financial consequences. Relevant categories of harm include the following:

• the acquisition by another party of a consumer’s identifiers and particularly identity authenticators, such that identity fraud can be committed. These include usernames (identifier) plus passwords/PINs/passphrases and private signing keys (identity authenticators), and credit-card details (card-number as identifier, plus the associated identity authenticators - comprising the name on the card, the billing address, the expiry date, and possibly also the ‘authentication code’);

• the initiation of transactions by someone other than an authorised user of the device. Examples include the transfer of funds from an account owned by the individual to an account owned by someone else; payment for goods and services provided to someone else; and the replay of previous transactions recorded in a log-file;

• interference with transactions undertaken by an authorised user. Examples include diversion of a legitimate payment to another account; and diversion of the delivery-point of goods and services for which an order is legitimately placed;

• use of a consumer device as a tool in a fraud perpetrated on another party. An example is the use of a device as a way-station for transaction-laundering.

The body of this paper considers each of a variety of contexts in which consumer devices are subject to threats and vulnerabilities. The first cluster is associated with the physical contexts in which consumer devices are used. The second group is concerned with the operation of the devices themselves, and the third with their use in conjunction with communications facilities. The final group comprises intrusions by attackers, including various forms of malware, and computer hacking.

In each case, consideration is given to vulnerabilities, and to the safeguards that are available. The focus throughout is on the effectiveness of the safeguards, and their practicability for consumers.

4. Vulnerabilities and Safeguards

The purpose of this section is to identify threats and vulnerabilities, and the safeguards that may provide protection against them. The focus is limited to the conduct of transactions that the authorised user did not intend, or the conduct of transactions in a manner materially different from that which the authorised user intended.

The structure adopted reflects the widely varying sources of the threats and vulnerabilities. Some result from the physical context in which the consumer device is used, some from the nature of the device itself, and others from the communications between the consumer device and other devices. A separate section considers active intrusions into the consumer device by attackers.

4.1 The Physical Environment

Problems are considered firstly in terms of the physical surroundings and secondly in terms of the organisational context of their use. The final sub-section addresses social engineering factors, in particular the impact of techniques designed to cajole consumers into divulging information.

The Physical Surroundings

The locations in which devices can be used were once constrained by size, power requirements and network connection requirements. With the majority of consumer devices, those constraints have been overcome, and the physical surroundings are now enormously varied, and include the home, the workplace, other people’s homes, and ‘public places’ of many different kinds.

There are various ways in which consumer devices are capable of being used by some person other than the authorised user. While the authorised user is operating the device, and while it is not in use but securely in that person’s possession, it is difficult for other people to gain access to the controls. At other times, depending on the size of the device and the care taken by its owner, there are various circumstances under which access to it by other individuals may be feasible. Common sources of such problems include fellow householders in the owner’s place of abode, friends in social environments, and colleagues in work-environments - and perhaps cleaners, security staff, repairmen and supervisors if the device is left at work.

If the device’s capabilities are abused, the authorised user may or may not be aware of it. Furthermore, there may or may not be a way in which the user, or someone acting on the user’s behalf, may be able to discover that someone else has used it, and, if so, what for.

It is possible to impose physical security measures on the surroundings in which devices are left. Examples include auto-locking doors, unique door-keys, and security cabinets with unique keys. But few consumer devices are subject to such safeguards, because they are expensive, at least inconvenient and in many circumstances impracticable.

It is also possible to impose physical security measures on the devices themselves. Examples include locks and ‘dongles’ (tokens that must be inserted into ports on the device, in the absence of which the device is disabled). But such safeguards are not mainstream in the marketplace, and are expensive, inconvenient and impracticable for consumers. As a result, few consumer devices are subject to them.

A further set of safeguards is commonly referred to as ‘logical’ safeguards, to distinguish them from the physical ones. These include:

• the prevention of any use being made of the device unless the user demonstrates that they know a specific ‘secret’, such as a password, PIN or passphrase. The process of providing such a secret is commonly referred to as 'logging in', and the generic term is ‘identity authentication’;

• the auto-locking of the device after a period of inactivity, typically 10-15 minutes, forcing authentication to be performed again in order to unlock the device;

• stronger forms of identity authentication. One example is periodic demand that the authorised user’s thumb be placed on a reader built into or connected to the device, with the device rendered inoperable if the print does not match sufficiently closely to a pre-recorded image.
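The inactivity time-out described in the second point amounts to little more than tracking the time of the last user action. A minimal sketch follows, with hypothetical names and a deliberately simplified unlock check:

```python
import time

LOCK_AFTER_SECONDS = 10 * 60  # the typical 10-15 minute inactivity window

class DeviceLock:
    """Illustrative sketch only of an inactivity auto-lock."""
    def __init__(self, timeout=LOCK_AFTER_SECONDS):
        self.timeout = timeout
        self.last_activity = time.monotonic()
        self.locked = False

    def record_activity(self):
        # Every keystroke or mouse movement resets the clock.
        self.last_activity = time.monotonic()

    def check(self):
        # Lock the device once the inactivity window has elapsed;
        # the user must then authenticate again to unlock it.
        if time.monotonic() - self.last_activity >= self.timeout:
            self.locked = True
        return self.locked

    def unlock(self, supplied_secret, stored_secret):
        # Simplified: a real device compares a salted hash of the
        # secret, never the raw secret itself.
        if supplied_secret == stored_secret:
            self.locked = False
            self.record_activity()
        return not self.locked
```

Note that the countermeasure described below - using the device frequently enough that a time-out never occurs - works precisely because every recorded activity resets `last_activity`.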

Such measures vary greatly in their effectiveness, but many significantly reduce the scope for use of the device by unintended people. They do not reliably prevent it, however, because all such logical security safeguards are subject to countermeasures. For example:

• a password may be discovered by watching someone key it in, or by guessing that it will be the same as the name of the person, their partner or their pet, or by finding it written down somewhere;

• auto-locking can be avoided by ensuring that the device is used sufficiently frequently that a time-out never occurs;

• a copy of the authorised user's thumbprint can be acquired, and an ‘artefact’ (such as a latex overlay) can be devised that enables an imposter to masquerade as the authorised user.

Generally, consumer devices are subject to little in the way of logical security measures. Most consumers are only vaguely aware of the threats and do not appreciate the harm that could arise if colleagues or visitors use their devices; few are aware of the available safeguards. The safeguards are in any case generally at least inconvenient and even entirely impracticable, and they may be expensive both to install and to maintain.

The Organisational Context

The strongest forms of protection may be available where the consumer works for a corporation that employs or contracts specialists with the capability to support users. Employers may see it as being to their advantage to assist their employees to protect themselves, because the employees are very likely to conduct company business on the same device. Some consumers will undertake relevant training with their employer, or at least become aware of threats to and vulnerabilities of similar devices used within the employment context, and of the safeguards that their employer applies.

Support of a similar kind may be available from computer clubs, and may be able to be acquired from suppliers of consumer devices and related services, perhaps as an extension to the basic package. Locations that make consumer devices available for free or for fee (such as libraries, Internet cafés, and coffee-shops that operate Wifi 'hot-spots') may provide protections. On the other hand, they may lack protections that the consumer might assume to be in place; and they may even be set up by the operator or a third party to create opportunities for mis-deeds.

The organisational contexts within which consumers work and play are likely to give rise to some degree of awareness and some mitigation of threat. It is nevertheless to be expected that a substantial proportion of consumer devices will remain largely unprotected.

Social Engineering

Consumers are subject to observation when they use their devices, and this creates vulnerabilities. The most apparent is the use of simple passwords and PINs that are easily inferred from the user's movements. Another is the unintended disclosure that use of the device does not require authentication, or that unlocking of the device is performed by a simple, observable procedure. There appear to be few 'defensive driving' courses available for consumers, so it is to be expected that most users will be vulnerable in this way.

A further threat is conventionally described using the term ‘social engineering’. This refers to techniques whereby people can be manipulated into performing desired actions or divulging confidential information. Common examples include gaining the confidence of a person over the counter or over the telephone, and inveigling them into disclosing personal data about a third party.

A primary example of social engineering applied to consumer fraud is the acquisition of the authenticators that the consumer uses when authorising payments. A further example is where users are convinced to change their security settings in order to install or execute a program, undermining safeguards and enabling software to run for malevolent purposes.

An all-too-common application of the general notion of social engineering is ‘phishing’. This technique involves sending a message, commonly an email-message, to the user, that causes them to provide their authenticators to the fraudster, or to use them in such a manner that the fraudster can acquire them. A common approach is to provide a URL, and ask the consumer to visit the site and go through the authentication process. The site is usually masquerading as a real financial institution, typically reproducing the institution's look and feel, but capturing data that should normally only be provided to the institution itself. Various reasons are given to encourage the consumer to divulge the data, such as the need to re-set the consumer's password as a result of the old one being compromised. The technique appears to have yielded significant returns to fraudsters, and many variants and refinements exist.
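The deception typically turns on a mismatch between the address displayed to the consumer and the destination the link actually points to. The following sketch illustrates that single check only; the function name and example addresses are hypothetical, and real phishing filters rely on many more signals:

```python
from urllib.parse import urlparse

def link_is_suspicious(displayed_text: str, actual_href: str) -> bool:
    """Flag a link whose visible text names one host but whose target
    points somewhere else - the classic phishing pattern.
    Illustrative sketch only."""
    shown_host = urlparse(displayed_text).hostname
    real_host = urlparse(actual_href).hostname
    if shown_host is None or real_host is None:
        return False  # the displayed text is not a URL; nothing to compare
    return shown_host != real_host

# The email displays the bank's address but the link points elsewhere:
print(link_is_suspicious("https://www.examplebank.com.au/login",
                         "http://203.0.113.7/examplebank/login"))  # True
```

Even so simple a comparison is beyond most consumers, who see only the displayed text, not the underlying target address.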

In ASIC’s discussion paper, Q29 and Q30 consider whether ‘extreme carelessness in responding to a deceptive phishing attack’ should be grounds for imposing increased liability levels on consumers. The questions contemplate the imposition of high levels of consumer liability even in the absence of ‘extreme carelessness’.

A strong contrary argument exists. Financial institutions have actively promoted telephone and Internet banking. They have done so despite the threats and vulnerabilities that exist. And they have provided their customers with user agreements expressed in legalistic terms, rather than training in safe use of the facilities.

Even after the problems became apparent, public education programs have been inadequate to reduce the incidence with which the seemingly simple stratagem of phishing works. Moreover, few financial institutions have implemented ‘two-factor’ authentication techniques (such as a one-time password communicated to the consumer over a separate channel). The inadequate efforts of financial institutions and governments in addressing the phishing epidemic have led to widespread public dissatisfaction, and threaten public confidence in eBanking.
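The one-time password approach mentioned above can be sketched as follows. This is a simplified illustration, not any institution’s actual scheme; production systems follow standards such as HOTP (RFC 4226):

```python
import hashlib
import hmac
import secrets

def issue_one_time_password(shared_key: bytes, transaction_id: str) -> str:
    """Derive a short one-time code bound to a specific transaction.
    The institution sends the code to the customer over a separate
    channel (e.g. SMS), and the customer keys it back in. A phisher
    who captures the code cannot reuse it for a different transaction.
    Simplified sketch only."""
    digest = hmac.new(shared_key, transaction_id.encode(), hashlib.sha256).digest()
    # Truncate to a six-digit code, as typical OTP schemes do.
    return f"{int.from_bytes(digest[:4], 'big') % 1_000_000:06d}"

# The key is shared in advance between the institution and the
# customer's second device (the hypothetical transaction-id is ours):
key = secrets.token_bytes(32)
print(issue_one_time_password(key, "transfer-20070401-0042"))
```

Because the code is bound to one transaction and delivered over a second channel, malware that captures the consumer’s ordinary password gains nothing reusable.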

4.2 The Physical Device

Many vulnerabilities arise from, or in relation to, the consumer device itself. Firstly the hardware and systems software are considered, and then the applications that run over them. Separate consideration is given to the functions that the device performs, the installation of new software, and the means whereby software is activated.

4.2.1 Hardware and Systems Software

The intention of the designers of consumer devices is to create a highly functional device that is attractive to the target market, but inexpensive. Many of the physical components used in their manufacture are cheap commodities. Additional weaknesses derive from the generic nature of the architecture within which the components are placed, and the lack of a comprehensive security strategy. As a result, consumer devices are not intrinsically secure, and they omit features that would be needed to enable them to be converted into secure devices.

At the level of the operating system, security has been a theoretical topic for decades, but there has been little practical outcome. Moreover, consumer devices depend variously on commodity operating systems and on cut-down versions of operating systems that were originally designed for more powerful desktop and laptop machines.

Security concerns exist in relation to the Linux and Macintosh operating systems (the latter of which is also Unix-based). The various Microsoft operating systems that are used on the large majority of consumer devices, on the other hand, have always been inherently insecure, and the company’s recent commitment to reduce the insecurity of its systems appears to be only slowly bearing fruit. The origins of these problems include a low level of quality in design and coding, and inadequacies in quality assurance. A variety of vulnerabilities result, such as ‘buffer overflows’. These create many opportunities for ‘hackers’, discussed below, and are widely exploited.

There is only a limited amount that a consumer can do about the many vulnerabilities arising from this cluster of quality inadequacies, and the many attacks that exploit those vulnerabilities. Operating system providers generally issue upgraded versions and ‘patches’. Their release is often forced, because the vulnerability has become known and reports have been published by organisations such as CERTs (including AusCERT) and commercial security firms.

However, in order to overcome each set of vulnerabilities, consumers are generally forced to accept everything else that comes with the bundle. This may include undesirable features such as ‘bloat’ (i.e. significantly increased memory requirements, with implications for the device’s speed of operation) and ‘spyware’ (discussed later). Moreover, the supplier may seek to impose on the ‘locked-in’ customer licence terms that are yet more onerous than those originally applied.

Software that was poorly-designed in the first place is inherently complex, difficult to understand, and very challenging to reliably amend, especially in a hurry. So patches rushed out to address a newly-publicised vulnerability often also contain new vulnerabilities.

4.2.2 Applications

The applications that are run on consumer devices are in many cases insecure, and in some cases extremely insecure. As with systems software, many applications exhibit low-quality design, coding and quality-assurance measures. Insecure programming languages have also contributed greatly to the problem. Many applications, when they crash, create vulnerabilities that 'hackers' can utilise.

Email-clients and particularly Instant Messaging (IM) clients are of concern, but Web-browsers are an especially easy target for attackers. Most versions of the most commonly-used web-browser, Microsoft Internet Explorer (MSIE), have been highly insecure, not least because of the default settings of a variety of parameters. The most recent versions of MSIE appear to have been improved in a number of ways, but those improvements are swamped by other factors discussed below.

All browsers, by intent of their designers, deal relatively openly with remote devices that comply with the HTTP protocol. A first vulnerability is the so-called ‘cookie’ feature, whereby remote devices can instruct browsers to store data, and to send that data with subsequent requests to web-servers. Most uses of cookies breach the IETF Best Practices Guide to the use of Cookies (RFC 2964), and many of those uses unintentionally, and in some cases intentionally, create vulnerabilities.
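The mechanics are visible in the HTTP headers themselves. The following sketch parses a hypothetical Set-Cookie header using Python’s standard library, showing the data that the browser would thereafter attach to every matching request:

```python
from http.cookies import SimpleCookie

# A response header from a hypothetical web-server, instructing the
# browser to store an identifier and return it with later requests:
response_header = "session=abc123; Domain=.example.com; Path=/"

jar = SimpleCookie()
jar.load(response_header)

# The browser attaches the stored value on each subsequent request to
# any server matching the Domain attribute - which is what makes
# cookies usable for tracking as well as for session management:
print(jar["session"].value)       # abc123
print(jar["session"]["domain"])   # .example.com
```

The breadth of the Domain attribute illustrates the concern: a cookie set for `.example.com` is returned to every host under that domain, whether or not the consumer intended any relationship with it.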

Most browsers permit the download of additional software modules variously called ‘helper applications’ and ‘plug-ins’. Most also support a particular programming language commonly called JavaScript (but more generically and correctly referred to as ECMAScript). Many of the HTML files delivered from web-servers to web-browsers may contain code expressed in that language. The language is claimed to be reasonably limited in its functionality, but rumours of vulnerabilities emerge from time to time. An unrelated programming language called Java is also available, which is much more powerful than JavaScript/ECMAScript. It is restricted to a ‘sandbox’ and hence the extent to which it can be used to develop attacks on the consumer device is limited.

The design of MSIE, however, directly results in consumer devices being insecure. It supports software components usually referred to as ‘ActiveX controls’, which are not limited to a ‘sandbox’ (as Java applications are), but have essentially unfettered access to the complete consumer device. Hence the operating environment is utterly permissive of software that is delivered to it. Non-MSIE browsers may share MSIE's designed-in insecurity, to the extent that they support the same capabilities. Although ActiveX controls require user affirmation, comprehension by consumers of how their device may be affected is generally very limited.

A recent development is an extension to the Web protocol called XMLHttpRequest. This was originally devised by Microsoft but has since been widely adopted. It extends the capabilities available to programmers, and reduces the extent to which the user does, or even can, understand what their device is doing. A family of development techniques referred to as AJAX takes advantage of this extended, more powerful Web protocol.

The AJAX approach enables closer control by the programmer of the user's visual experience, because small parts of the display can be changed without the jolt of an intervening blank window. This is achieved by constructing an ‘Ajax engine’ within the browser, to intercept traffic to and from the web-server. Control of the browser-window by code delivered by an application running on the server represents a subversion of the concept of the Web and a hijacking of the functions of the browser. The power it offers provides programmers with the capacity to manipulate consumer devices. It is a boon for attackers.
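The partial-update pattern can be conveyed by a loose analogy in Python (the page content is invented purely for illustration): rather than the whole page being replaced, a small server-sent fragment is merged into the existing display state, leaving the rest untouched:

```python
# A loose analogy for an 'Ajax engine': a fragment fetched in the
# background is merged into the existing page state, rather than the
# whole page being reloaded. All content here is illustrative only.

page_state = {
    'header': 'MyBank Internet Banking',
    'balance': '$1,000.00',
    'footer': 'Terms and conditions apply',
}

def apply_fragment(state, fragment):
    """Merge a server-sent fragment into the page, leaving the rest intact."""
    state.update(fragment)
    return state

# The background reply updates only one region of the display.
apply_fragment(page_state, {'balance': '$950.00'})
print(page_state['balance'])  # $950.00
```

The same merging step that makes the display seamless is what removes any visible cue to the user that traffic has occurred.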

Some safeguards are available to consumers. Cookies may be blocked, or managed, although the tools for doing so vary widely in their approach, are difficult to understand, and are in many cases inadequate. JavaScript may be switched off; but a great many web-sites will not function if it is, most fail to detect and report to the consumer that they will not function correctly, and very few provide alternative ways of delivering the functionality. Because so many services become unusable, consumers face significant disincentives against turning JavaScript off. Hence it is unlikely that many would do so, even if they appreciated that a number of vulnerabilities (not to mention many consumer-annoyance features) can be avoided in this way.

Java can also be turned off, but similarly some sites will not function, and in many cases they fail to detect and report to the consumer that that is the case. Because Java is limited to a sandbox, it does not appear that leaving it turned on directly creates many security vulnerabilities. On the other hand, it is a complex programming language that is too challenging for a great many programmers, and bugs and browser-crashes are common, which may result in vulnerabilities in some circumstances.

ActiveX also may be switched off (although on at least some versions of MSIE it appears that five separate options must be disabled). It may also be enabled for ‘trusted sites’ and disabled for all others; but the option is nested five levels down a complex menu-tree, and it is unlikely that many consumers even find the function, let alone understand it. As with JavaScript and Java, if settings are adjusted to prevent ActiveX controls from running, many sites will not function, or will not function as the designer intended. Most consumers are oblivious to the existence of these facilities, let alone the dangers they embody, and the opportunity to avoid those dangers by sacrificing some of the experiences that the Web offers them.

It appears that even highly technically literate consumers may be either unable to preclude AJAX techniques from intruding into their devices, or unable to do so without abandoning access to a wide range of services. In particular, there appears to be no convenient, consumer-understandable way in which AJAX techniques can be permitted under specific circumstances only (such as from a known and trusted supplier like their bank) without the device being open to all comers.

The alternative of using ancient browser-versions, or intentionally cut-down browsers that do not support key features on which AJAX depends, incurs considerable disadvantages. Most web-site developers design applications to run only on very recent browser-versions (or, in a remarkable number of instances, only on the most recent versions of MSIE). Old and cut-down versions of browsers therefore quickly become unusable on many sites; and hence there is a built-in and powerful disincentive working against consumers using less vulnerable browsers.

Consumers might reasonably expect that computer crimes legislation would make such abuses of their devices unlawful. As discussed in section 5.2 below, however, that expectation is not fulfilled.

In short:

• browser-based applications are extremely vulnerable;

• browser-based applications are extremely vulnerable by design;

• there is little that consumers can do about these vulnerabilities, because:

o in order to avoid them, a consumer would need to deny all of the insecure features (cookies, JavaScript, ActiveX and Java), or use a web-browser that ignores them;

o by doing so, consumers would have to forego many features on many sites; and

o many transaction-based sites use those capabilities, and hence people adopting those strategies in effect preclude themselves from conducting transactions and making payments on the Internet.

Expressed differently, many eCommerce and even eBanking services only work because they exploit vulnerabilities on consumer devices.

4.2.3 The Functions Performed by the Device

Each consumer has an understanding of what their devices do. That understanding is based on representations made by the providers of the device and software running on it, information provided by other consumers and the media, their own experience of their devices' behaviour, and sometimes even documentation if it is made available by suppliers and read by the consumer.

Representations, reputation and experience are not comprehensive, and consumer devices perform many functions that authorised users are not aware of. Hence consumers have at best only a very partial understanding of the functions performed by their devices.

Moreover, some functions are designed by the providers of the software to be hidden, and to be difficult to discover. Common instances of this include:

• attempts by providers to infiltrate consumer devices for such reasons as surreptitious advertising, consumer profile construction, and financial fraud. This is discussed in greater detail in the following sections;

• attempts by providers to build into programs the ability to detect unlicensed use of the program (e.g. by having software ‘report home to base’, including data about the device on which it is being run and/or the person responsible for the device);

• attempts by providers to build into programs the ability to detect unlicensed use of other software and data.

Because software performs such additional functions, consumer devices may participate in transactions that the authorised user did not intend, or that are different from what they intended. The installed software may perform functions autonomously, or it may be triggered by some external stimulus. Moreover, this may even occur without the user knowing that it is happening, or that it has happened. The software may be designed to be surreptitious, by minimising the extent to which the fact that a transaction has occurred can be detected through examination of data-logs. Despite these serious vulnerabilities, it would appear that ASIC, on behalf of corporations, is contemplating imposing ‘additional liability under cl 5 for unauthorised transaction losses resulting from malicious software attacks on their electronic equipment if their equipment does not meet minimum security requirements’ (Q28).

In order to be protected against such eventualities, consumers would firstly need to invest effort to understand the complete set of documented functions of every item of software running on their devices. Secondly, they would need assurance that the software contained no undocumented functions. Very few consumers are capable of performing an audit of executable code. Indeed, such an audit is extremely complex and challenging, and any such service from an independent third party would be expensive, and the level of assurance provided (as indicated by the limited warranty that would be offered) would not be high.

A more reliable form of assurance would be certification by an independent third party based on inspection of and experimentation with the source code rather than the executable code. It is likely that most software providers would be unwilling to submit their code for inspection in such a manner, and it appears that few such inspections are performed, even for the business market, let alone for consumer products.

The consumer could seek certification from each software supplier about the functions that the software performs, and the security features it embodies. Further, the consumer could seek warranties and indemnities from each software supplier. In practice, however, consumers lack the market power to make suppliers do such things. In any case, unless carefully designed, such mechanisms would be cumbersome, and would work against widespread adoption of eCommerce. Instead, consumers are generally forced to accept software without certification, and without significant warranties and indemnities, and indeed with onerous obligations unilaterally imposed on them and expressed in complex and aggressive terms.

Worse, trade practices regimes impose very limited responsibilities on software suppliers, and provide very weak protections to consumers. As a result, it is not clear that any software suppliers at all provide certification, nor any material warranties or indemnities about the functions their software performs.

A form of ex post facto control could be implemented, by logging the traffic generated by the device, and comparing it with a model of the traffic that was expected given the consumers' actions. Little software appears to be available that provides such controls. Developing, installing, configuring and operating such tools would be difficult enough for experienced professionals, and the challenges involved far exceed the capabilities of the vast majority of consumers. Furthermore, such forms of testing might even be illegal, by virtue of constraints on reverse engineering embodied in extensions of copyright law enacted in recent years for the benefit of copyright-owners.
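In outline, such an ex post facto control would compare logged traffic against a model of expected traffic. The following Python sketch (with hypothetical host names) shows the comparison step, which is the easy part; constructing a trustworthy model of ‘expected’ traffic is the difficulty identified above:

```python
# Sketch of an ex post facto traffic check: compare logged destination
# hosts against a model of the traffic expected from the consumer's own
# actions. All host names are hypothetical.

expected_hosts = {'www.mybank.example', 'mail.example.net'}

observed_log = [
    'www.mybank.example',
    'mail.example.net',
    'tracker.adnetwork.example',   # traffic the consumer never initiated
]

unexplained = [h for h in observed_log if h not in expected_hosts]
print(unexplained)  # ['tracker.adnetwork.example']
```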

A mechanism exists that is commonly referred to as ‘code signing’. This enables a software provider to digitally sign software that it distributes. If a consumer can acquire the relevant public signing key through some reliable channel, and confirm that the digital signature is valid, then there are two useful inferences: firstly that the software was signed by the organisation that claims to have signed it; and secondly that it arrived in exactly the same form as it was despatched. The code-signing approach therefore addresses two relatively minor risks in relation to software distribution (that it was created by someone other than the organisation that it is meant to have come from, and that it may have been changed in transit).
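The two inferences can be sketched in a few lines. Real code signing uses asymmetric (public-key) signatures; the Python fragment below substitutes a keyed hash (HMAC) purely so that the example is self-contained, and the key and file contents are hypothetical:

```python
import hashlib
import hmac

# Illustration of the two inferences code signing supports: origin and
# integrity. Real schemes use asymmetric signatures; HMAC with a shared
# key is used here only as a self-contained stand-in.

signing_key = b'suppliers-signing-key'            # hypothetical
software = b'...executable bytes as despatched...'

signature = hmac.new(signing_key, software, hashlib.sha256).hexdigest()

def verify(received, sig, key):
    """Check that the received bytes match what the key-holder signed."""
    expected = hmac.new(key, received, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

print(verify(software, signature, signing_key))                 # True
print(verify(software + b' tampered', signature, signing_key))  # False
```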

But code signing does nothing to address the vital question as to whether the software performs any functions that the consumer did not expect, and in any case the warranties offered by certificate authorities are so small as to be essentially valueless.

4.2.4 Software Installation

The preceding sub-sections have considered the functions performed by systems software and applications that are already on consumer devices. This sub-section considers threats and vulnerabilities that arise at the time that software is installed.

Reference was made earlier in this paper to the social engineering mechanism of inveigling users into de-activating safeguards in order to enable the attacker to go about their business more easily. Vulnerabilities of this kind can arise even without an attacker involved. It is quite commonly necessary for safeguards to be circumvented in order that desired software can be successfully installed. The vulnerability may become long-term if the security settings are not adjusted back to their normal level immediately after the installation is conducted. And if the settings apply to all processes running in the device rather than only to the approved installation process, a vulnerability exists at the very least for the duration of the installation activity.

When confronted with a security alert warning about such vulnerabilities, the user's understanding is commonly limited, and their options are commonly restricted to ‘permit’ or ‘deny’. Where a ‘learn more’ option is available, it often delivers statements about unknown certificate authorities or security parameters, which are expressed in a manner that is at least daunting, and often simply incomprehensible. Hence the extent to which consent is ‘informed’ is in considerable doubt.

A rational risk assessment process would lead a consumer to distinguish between different categories of activity, in particular:

• consumer-pull download and installation activities, where the user has made a conscious decision to go to a more or less trusted site and fetch a program;

• remote-site driven/‘push’ installation activities, where the user has a conscious expectation that additional software may be needed (e.g. auto-patching of software, where the person is aware that they have a subscription to that supplier and the security alert looks familiar); and

• remote-site driven/‘push’ installation activities, where the user did not have a conscious expectation that additional software would be needed.

Unfortunately, it is likely that only a small percentage of consumers would be even vaguely aware of these categories, and an even smaller percentage could distinguish the different risk profiles that each presents.

In section 4.2(2) above, reference was made to a variety of circumstances in which consumers are not even made aware that software has been loaded onto their machine. In the case of ActiveX controls, the lack of a sandbox suggests that software that is ostensibly delivered for a single specific transaction may be able to be permanently installed, and in such a manner that it is generally available rather than limited to a specific context.

4.2.5 Software Activation

In order for a software function to be performed, the relevant software has to be invoked, executed or activated. There are several ways in which this can come about, including:

• by an action of the authorised user of the device:

o using something that forms part of the device, such as a keyboard and/or mouse; or

o remotely;

• by an action of some person other than the authorised user of the device:

o using something that forms part of the device, such as a keyboard and/or mouse; or

o remotely;

• because of the action of some logic in some other software running on the device. An example of this is a timed action (e.g. ‘run this backup program at midnight each evening’);

• by automated invocation of the software (e.g. every time the device is initialised). Such software is commonly referred to as a ‘daemon’ or a ‘Windows service’;

• as a result of a request by the consumer device to a remote device, which results in software being downloaded and executed. A primary example of this is a request from a web-browser to a web-server for the file that is stored at a particular URL. If this file contains software (in a form such as JavaScript, Java code or a Microsoft ActiveX ‘control’, i.e. program), then the software is automatically executed on arrival at the consumer device, unless (as discussed in the previous sub-sections) settings in the device block its execution, or make its execution dependent on the user providing express approval;

• by the action of a remote device. This is referred to as a ‘push’ mechanism, to distinguish it from the ‘pull’ mechanism described in the previous bullet-point. A very important example of this is a file attached to an email, which is automatically received and stored, without any action by the user other than the periodic downloading of the contents of the mailbox, and which may (depending on the settings in the email client-software) be automatically invoked.
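The timed-action case in the list above reduces to simple arithmetic: a scheduler computes how long to wait before invoking the program. A small Python sketch, using the midnight backup as the example:

```python
import datetime

# Sketch of the timed-action case: compute how long a scheduler should
# wait before invoking a backup program at the next midnight.

def seconds_until_midnight(now):
    tomorrow = (now + datetime.timedelta(days=1)).date()
    midnight = datetime.datetime.combine(tomorrow, datetime.time.min)
    return (midnight - now).total_seconds()

now = datetime.datetime(2007, 6, 1, 23, 0, 0)
print(seconds_until_midnight(now))  # 3600.0
```

The point for present purposes is that the invocation occurs without any contemporaneous action, or awareness, on the part of the user.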

Versions of Microsoft browsers (MSIE) and email-clients (Outlook) were for many years distributed in insecure form, such that they permitted any form of file to be invoked on arrival in the client-device. They were intrinsically permissive and hence dangerous. The most recent versions have been distributed with less permissive defaults. However:

• some consumers may change the parameters to an unsafe setting without realising that their action applies to all subsequent activities, or without appreciating the risks that the change entails;

• the settings may be able to be changed by malware;

• some browsers, in particular older versions of MSIE, automatically permit programs to run if they are received from a web-server in response to a request from a web-browser, unless a relevant setting is switched off. There is no test to see whether the client was expecting just a web-page display rather than active software.

Consumers who are educated about the risks involved in using their devices, who are well-informed about the specific features of their web-browser and email-client, and who take care to initialise and maintain all of their software parameters at a safe setting, can generally preclude emailed files from executing on their devices. The proportion of consumers who satisfy those conditions, and who sustain their vigilance at all times, is, however, not likely to be high.

Some limited protections are possible in relation to cookies, Javascript, Java and ActiveX, but only if the consumer is:

• sufficiently well-educated to be able to understand not only technical features but also the legal dialects in which software licences are expressed;

• careful;

• sufficiently unhurried to read software licences, give proper consideration to security alerts, and if necessary conduct further investigations;

• willing to forego much of the experience that web-sites offer; and

• willing to be precluded from conducting transactions on many sites that depend on insecure features of web-browsers.

4.3 Communications

A range of vulnerabilities arise from the fact that consumer devices communicate with other devices, and the manner in which they do so. This section considers firstly the partners with whom communications are exchanged, and secondly the flows of messages between them.

4.3.1 Transaction Partners

When conducting a transaction, a consumer needs to have confidence that their device is interacting with the (or an) appropriate device operating on behalf of the (or an) appropriate person or organisation. The term ‘identity authentication’ is commonly used to refer to the checking performed to provide that confidence.

A mechanism is available that provides a form of identity authentication on the Internet. The Secure Sockets Layer (SSL) mechanism, now standardised as Transport Layer Security (TLS), enables any party to digitally sign a message and invite other parties to check the digital signature. Unfortunately the scheme falls far short of providing real confidence in the identity of the other party. The reasons include the low quality of the certificates on which the mechanism depends (as evidenced by the almost complete absence of any meaningful warranties and indemnities), and the low quality of ongoing maintenance of certificate schemes, with many outdated certificates, slow updating of directories, and few implementations of online certificate checking.
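One small and mechanical part of certificate checking, the validity period, can be illustrated with Python's standard ssl helpers, applied to a structure of the kind returned by an SSL socket's getpeercert() method. The certificate shown is hypothetical; an outdated certificate of the kind mentioned above fails the check:

```python
import ssl
import time

# Checking one aspect of a certificate -- its validity dates -- against
# a structure like that returned by ssl.SSLSocket.getpeercert().
# The certificate contents are hypothetical.

cert = {
    'subject': ((('commonName', 'www.mybank.example'),),),
    'notBefore': 'Jan  1 00:00:00 2006 GMT',
    'notAfter':  'Jan  1 00:00:00 2007 GMT',
}

def is_within_validity(cert, at_time):
    start = ssl.cert_time_to_seconds(cert['notBefore'])
    end = ssl.cert_time_to_seconds(cert['notAfter'])
    return start <= at_time <= end

mid_2006 = ssl.cert_time_to_seconds('Jul  1 00:00:00 2006 GMT')
print(is_within_validity(cert, mid_2006))     # True
print(is_within_validity(cert, time.time()))  # False (long expired)
```

Even this simple check is rarely surfaced to consumers in a comprehensible form, and it says nothing about the far harder questions of who issued the certificate and whether it has been revoked.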

The process of checking the identity of organisations and individuals on the Internet is technically challenging and complex. Consumers have little understanding of it (and indeed the significance of the process eludes many postgraduate students). And it is of very low quality in any case. Most consumers put their faith in prior transaction partners, the honesty of other parties or consumer protection laws; or they simply take the risk and hope for the best. This is not an environment that encourages greater use of electronic networks for the conduct of business.

In principle, SSL/TLS enables the server to authenticate the identity of the client (i.e. of the device, or the web-browser, or conceivably the web-browser user). This could be a general scheme, or a specific scheme, in particular one implemented by financial institutions for their customers. In practice, however, this potential is very little used, and such schemes as exist have attracted limited participation. The user's private signing key (which must be stored on and used by the device) is at risk of capture by malware or ‘hacking’, and consumer devices are especially vulnerable.

4.3.2 Data Transmission

The data transmitted during the course of a transaction is vulnerable when it travels over any form of communications link. In particular, it may be lost, may accidentally change or be intentionally changed while in transit, or may be intercepted. If it is intercepted, it may be used as part of an act designed to defraud the consumer or some other party.

Data can be protected in transit by encryption. There needs to be some means of ensuring that the intended recipient, and only the intended recipient, has the means of decrypting the message. A variety of tools exist, implementing a variety of encryption schemes. The most readily available is SSL/TLS, mentioned in the previous section, which is conveniently supported by all mainstream web-browsers.

Encryption of email sent from mainstream email-clients is, on the other hand, very poorly supported. Email-clients that work within web-browsers can readily take advantage of SSL/TLS. In practice, however, only a small proportion of consumers appreciate the importance of doing so. Worse still, some Internet Service Providers fail to switch users to protected mode (referred to as the https rather than http protocol). As a result, not only is most email content unprotected from eavesdroppers while in transit, but so too are some email-account passwords, which users transmit when they login to view their incoming messages or to compose and send their own outgoing messages.
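The exposure of passwords on unprotected links can be made concrete. Over plain http, an HTTP Basic-Authentication header is merely base64-encoded, not encrypted, so an eavesdropper reverses it trivially (the credentials below are, of course, invented):

```python
import base64

# Over plain http, a Basic-Authentication header is only base64-encoded,
# not encrypted. The credentials here are hypothetical.

credentials = 'alice@example.net:hunter2'
header = 'Basic ' + base64.b64encode(credentials.encode()).decode()

# What an eavesdropper on the unprotected link does with the header:
intercepted = base64.b64decode(header.split(' ', 1)[1]).decode()
print(intercepted)  # alice@example.net:hunter2
```

Only when the same exchange is carried inside an SSL/TLS-protected (https) connection is the header shielded from interception in transit.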

A further concern is undesired traffic between the consumer device and other devices. Any Internet-connected device has to have particular ‘ports’ open, on which it will accept messages. These ports are readily discoverable, and can be used by an attacker to probe for security vulnerabilities.

Some degree of protection against these threats can be achieved by utilising a ‘firewall’. For a consumer device, this is software that blocks all traffic except those messages that satisfy particular rules. Although this represents a safeguard against some attacks, it necessarily leaves a great deal open as well.
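A firewall's rule-matching core is straightforward, which underlines the point that it necessarily leaves a great deal open: anything that an allow rule covers passes unexamined. A minimal Python sketch, with illustrative rules and ports:

```python
# Minimal sketch of a consumer-device firewall: block all inbound
# traffic except messages that satisfy particular rules.
# Ports and rules are illustrative only.

ALLOW_RULES = [
    {'port': 80,  'proto': 'tcp'},   # web traffic
    {'port': 443, 'proto': 'tcp'},   # TLS-protected web traffic
]

def decide(packet):
    for rule in ALLOW_RULES:
        if packet['port'] == rule['port'] and packet['proto'] == rule['proto']:
            return 'ACCEPT'
    return 'DROP'

print(decide({'port': 443, 'proto': 'tcp'}))    # ACCEPT
print(decide({'port': 31337, 'proto': 'tcp'}))  # DROP (a probe)
```

An attack delivered over an allowed port, such as malicious content fetched by the browser itself, passes the firewall entirely.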

4.4 Intrusions

The previous sections have considered a wide variety of vulnerabilities that are intrinsic to consumer devices and their use. This section focusses on means whereby attacks can be mounted against consumer devices. The first two sub-sections focus on the various forms of ‘malware’, initially considering the means whereby malware can be infiltrated into consumers' devices, and then the kinds of things that malware can do. The third sub-section is concerned with means whereby other parties can gain remote access to, and operate on, consumer devices as though they had direct physical access to them.

4.4.1 Malware Vectors

The expression ‘malware’ is a useful generic term for a considerable family of software and techniques implemented by means of software, which result in some deleterious and (for the user of the device) unexpected outcome. ASIC's Q28 uses the expression ‘malicious software attacks’, presumably in the same manner in which information technologists use the term ‘malware’. A useful general term for the code that performs the harmful function is its ‘payload’. That is discussed in the following sub-section.

Malware comes to be on a consumer device by means of a ‘vector’. One example of a vector is portable storage such as a diskette, CD, DVD or solid-state electronic ‘drive’. Loading files from such media may deliver malware onto the device. So too may file-download from another device on a local area network. Since the mid-1990s, Internet connections have become the most common source of vectors for malware migration onto consumer devices. Connections to other wide area networks, such as those for 3G mobile phones, are likely to become a further source.

A longstanding and very common form of malware is a ‘virus’. This is a block of code that inserts copies of itself into other programs. It arrives on a device when an infected copy of a program is loaded onto it from an external source. If and when the infected program is invoked, in addition to the other functions it performs, the infected program seeks out other executable files and inserts copies of itself in them. In addition to the code that performs the replication function, a virus generally carries a payload, which may be intended to be constructive, to have nuisance value, or to have serious consequences for the device's owner or some other party. To avoid early detection, viruses generally delay the performance of functions other than replication.

Another well-known category of malware is a ‘worm’. A worm is a program that propagates copies of itself over networks. It does not infect other programs. Worms propagate by exploiting the many security vulnerabilities on consumer devices that were referred to in section 4.2 above.

There are many other vectors, including email attachments, web-pages, and files downloaded from instant messaging (IM) services and peer-to-peer (P2P) services. The file that is downloaded does not itself need to be an ‘executable’ (i.e. a program). It may be a data-file in which a segment of executable code is embedded, such as ‘macros’ within text documents, spreadsheets and slide presentations. Recent versions of Microsoft's Office suite provide an even more powerful facility for embedding code in data-files, called Visual Basic for Applications (VBA). Files prepared using the Microsoft Office suite consequently represent a major vector for malware.

Consumers can protect themselves against malware vectors in several ways. One would be never to download software onto the machine. But this is impractical in the extreme. One reason is that it is very difficult to reliably distinguish files containing executable code from data-only files. Another is that software suppliers actively attract people to install new software and new versions of software, and impose active disincentives against continuing to run old versions for extended periods (through removal of support, non-functioning after some ‘drop dead’ date, and ‘planned obsolescence’ such as non-support for old data formats). In any case, this approach does not address the problem of malware present on the device when it is originally acquired.

Another difficulty is that controls over the active ‘pulling’ of software onto the device do not prevent ‘pushing’ of software to the device by other parties. A primary example of ‘pushed’ software is email attachments, which can arrive without the device's user issuing any request for the software. Another crucial example of pushed software is the wide array of code that arrives in response to requests to web-servers. The consumer may think that they are requesting an HTML file that will display in their browser-window; but increasingly that is accompanied by active code that infiltrates the device; and as noted earlier, Microsoft's ActiveX ‘controls’ appear to have largely uncontrolled access to the whole of the device and local storage.

A conventional form of protection is akin to ‘perimeter defence’. This involves running software that is usually referred to as ‘virus protection software’ or ‘anti-virus software’. This checks incoming files for known instances of malware. There are many such products.
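The core of such checking is a search of incoming files for known byte patterns (‘signatures’). A minimal Python sketch, with an entirely hypothetical signature database, also makes plain why the approach fails for malware whose signature is not yet known:

```python
# Sketch of signature-based checking as performed by anti-virus
# software: incoming files are searched for byte patterns
# ('signatures') of known malware. The database here is entirely
# hypothetical.

SIGNATURES = {
    'Hypothetical.WormA': b'\xde\xad\xbe\xef',
    'Hypothetical.TrojanB': b'EVIL_PAYLOAD_MARKER',
}

def scan(file_bytes):
    """Return the names of any known signatures found in the file."""
    return [name for name, sig in SIGNATURES.items() if sig in file_bytes]

print(scan(b'ordinary document text'))           # []
print(scan(b'header \xde\xad\xbe\xef trailer'))  # ['Hypothetical.WormA']
```

A file carrying malware whose pattern is absent from the database passes the scan, which is the lead-time problem discussed below.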

Implementing such products requires understanding, patience and investment. The software may need to be acquired, in many cases for a fee; it needs to be installed; it most likely needs to be configured; and running it is an inconvenient overhead that delays the consumer's desired experience. Moreover, installation of such software creates additional vulnerabilities, as discussed in sub-section 4.2(4) above. In addition, because malware is in a state of continual adaptation, such software and the data that supports it require frequent updating. That is onerous if performed manually; but if the process is automated it creates yet further vulnerabilities.

All such protections are incomplete because there is a lead-time between the creation of new malware, discovery by the suppliers of protection software that it exists, discovery of the malware's ‘signature’ whereby it can be recognised in users’ storage, and distribution of the new data or software version to consumers' devices.

4.4.2 Malware Payloads

The previous sub-section focussed on how malware reaches consumer devices, and what safeguards exist that can prevent that happening. This sub-section considers what malware does once it makes it through the protections and gets itself installed on the device.

The term ‘trojan horse’ or ‘trojan’ refers to a program that purports to perform a useful function (and may do so), but also performs one or more malicious functions. An example is a useful utility (which, for example, helps find lost files, or draws a Christmas Tree that can be sent to friends at the appropriate time of year). If it is a trojan, then it performs some additional function (reminiscent of enemy soldiers carried in a wooden horse’s belly).

One use to which malware is put is to enable a consumer device to be controlled by processes running on some other computer. The term ‘zombie’ is used to refer to a device that has such malware installed on it. This aspect is further discussed in the following sub-section.

Where a malware payload gathers personal data on a consumer device, it is referred to as ‘spyware’. To be effective, such software generally operates surreptitiously, and without informed consent. Examples include code intended to assist corporations to monitor the use of copyright works that they own (such as software, images, music and videos), and tools that assist in the commission of financial fraud and theft.

Many instances of spyware are created by corporations to enable advertisements to be displayed on consumer devices, preferably ads that will be of interest to the user. The proponents of such software prefer the term ‘adware’, and seek to distinguish ‘adware’ from the broader category of ‘spyware’.

A particular sub-category of malware payload is commonly referred to as a ‘keystroke logger’. The function of this form of malware is to capture as data what the user keys on the keyboard. This may enable the conduct of fraudulent transactions, especially where the data is part of an authentication process, such as a password, PIN or passphrase. A keystroke-logger may also be used for surveillance of the user's activities, for example by the person's employer, by a corporation, or by a government agency such as a law enforcement or national security agency.

A further category of malware payload comprises tools to facilitate remote use of the device by another party. A legitimate form is so-called ‘remote administration software’, such as Microsoft Remote Desktop and Apple Remote Desktop. These enable users to be provided with technical support without the user, the device and the technician having to be in the same place at the same time. An example of a tool that mimics remote administration software but is used by unauthorised parties is Back Orifice. The use of such malware payloads is discussed in the following sub-section.

Safeguards exist against malware payloads that have already successfully infiltrated the device. The ‘virus detection’ software described in the previous sub-section can be run periodically, to 'audit' the software that is installed on the device. For this to be effective, however, the protection software and the data that supports it need to be updated continually, or at least from time to time. Viruses and worms may get through the perimeter protection in the first instance, because their signature, or even their very existence, is not yet known; but some time later they become recognisable and can be detected and removed. It may also be necessary to run additional anti-spyware software, to cater for software that arrives through vectors that are not monitored by the ‘virus protection software’. Implementing such protections requires understanding, patience and investment by the consumer, because running such software is an inconvenient overhead.
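The signature-based 'audit' just described can be sketched, in highly simplified form, as follows. The file names and the signature database are hypothetical; real anti-virus products match binary byte patterns and apply heuristics rather than comparing whole-file hashes, but the dependence on an up-to-date signature list is the same.

```python
import hashlib
from pathlib import Path

# Hypothetical signature database: digests of files known to be malware.
# A real product's database must be refreshed continually, because a file
# whose signature is absent from this set passes the scan undetected.
KNOWN_MALWARE_SIGNATURES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def file_signature(path: Path) -> str:
    """Return the SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def scan(paths):
    """Yield the paths whose contents match a known malware signature."""
    for path in paths:
        if file_signature(path) in KNOWN_MALWARE_SIGNATURES:
            yield path
```

The sketch makes the lead-time problem concrete: malware created after the last database update has no entry in the set, and the scan reports nothing wrong.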

Safeguards of all kinds are subject to countermeasures. This is particularly apparent in the context of malware, where there is a running battle between malware producers and the providers of safeguards. A further relevant category of malware payload is referred to as ‘rootkits’. These are tools that help conceal the presence of software and files on the device. They thereby assist the remote user to escape detection. In section 4.2(4) above, attention was drawn to the risk involved in software installation. Because anti-virus and anti-spyware tools attract some degree of trust from consumers, they are also used as vectors to infiltrate malware onto consumer devices.

4.4.3 'Hacking'

The term ‘hacking’ is commonly applied to the operation of a device by a remote user without the authority of the local user. Other (and preferable) terms for this are ‘break-in’ and ‘cracking’ (as of a safe).

There are myriad opportunities for crackers, because of the inherent insecurity of operating systems and applications described in section 4.2 above. Of particular relevance are the highly permissive nature of many default settings, and the desire of software developers to have unfettered access to consumer devices in order to enhance ‘the user experience’, market their own and other parties’ products, and exercise control over the use of their own and other parties' software and data.

There are readily-accessible libraries of recipes on how to conduct ‘hacking’. Many of the techniques have been productised in the form of ‘scripts’. The people who use them require a moderate amount of skill, but they do not need to be experts, and are sometimes referred to by the derogatory term ‘script kiddies’.

In addition, hacking may be made easy through the existence of a ‘backdoor’. This term refers to any planned means whereby a person can surreptitiously gain unauthorised access to a remote device. Some may be intrinsic to the software installed on the device before it is delivered to the consumer, whereas others are infiltrated into the device at a later stage. Examples include remote administration software referred to in the previous sub-section and intended to enable maintenance programmers to gain access, trojans infiltrated by means of worms, and features added into a program by viruses.

When a device has been hacked into, a remote user is able to operate the device as though they were the local user. The capabilities available may be somewhat restricted, or may be the same as those available to the local user. A hacker generally has reasonable technical competence, and hence knows enough to be able to do far more than most users can do with their own machine. In particular, a hacker may be able to escalate their privileges from the restricted capabilities of a normal user to the full set of privileges to operate the device that are available to a ‘super user’, ‘administrator’ or ‘root’.

A hacker who has cracked a device is in a position to run software that observes the conduct of financial transactions on the consumer device, and hence to capture identifiers and authenticators. In some circumstances, a hacker may be able to cause transactions to be conducted, e.g. to authorise transfer of funds under the control of the consumer to an account under the control of the hacker.

When a device is subject to automated remote control, it is referred to as a ‘zombie’, ‘robot’ or ‘bot’. A collection of such machines is referred to as a ‘botnet’. Botnets have been used to perform attacks on other computers (referred to as ‘distributed denial of service’ or DDoS), and to relay spam. It has been estimated that a large proportion of Internet-connected devices are zombies.

Zombies could conceivably be used as part of a financial fraud, e.g. to effect ‘transaction laundering’ by shifting funds through a succession of accounts controlled by consumers in different jurisdictions, thereby obfuscating the origin and/or eventual destination of the funds. A further application that has been speculated upon but does not yet appear to have been publicly demonstrated is market price manipulation, e.g. of shares traded on exchanges.

Some limited protections are available to consumers. They can inform themselves about the security settings of the operating system, systems software, and each application running on their device. However, there are scores of software items whose settings need to be controlled, the settings are complex and highly diverse, and the locations where the settings can be changed and the documentation relating to them are in many cases very obscure. In practice, few users are even aware of the problems, let alone capable of learning how to adjust their devices to be less vulnerable.

In section 4.3(2) above, mention was made of the availability of 'firewalls' as a means of preventing some forms of traffic. These can also deny some of the means whereby hackers can gain access to the device. Doing so requires understanding, effort and assiduousness on the part of the consumer. In any case, skilled attackers continue to have many avenues available whereby they can penetrate a consumer device.
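The filtering that a firewall performs can be illustrated, in very reduced form, as a default-deny rule over destination ports. The rule set here is hypothetical; real firewalls also consider addresses, protocols, connection state and the direction of traffic, and configuring them correctly is exactly the kind of effort the text argues consumers cannot reasonably be expected to make.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str
    dst_port: int
    protocol: str  # e.g. "tcp" or "udp"

# Hypothetical default-deny rule set: only traffic to these destination
# ports is permitted; everything not explicitly allowed is blocked.
ALLOWED_PORTS = {53, 80, 443}  # DNS and web browsing

def permit(packet: Packet) -> bool:
    """Return True if the packet may pass under the default-deny policy."""
    return packet.dst_port in ALLOWED_PORTS
```

Note that traffic on an allowed port still passes, which is one reason skilled attackers retain avenues into a device even behind a firewall.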

As mentioned in section 4.2(1) above, suppliers of systems software and applications issue 'patches' from time to time, which address vulnerabilities that have come to light, usually as a result of poor design and programming. Consumers can implement those 'patches' in order to block off some of the vulnerabilities on their machines that make them particularly susceptible to hacking.
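The version check underlying patching can be sketched as follows. The version numbers are invented for illustration, and real update mechanisms also verify the authenticity of a patch before installing it; the sketch shows only the comparison step.

```python
def parse_version(version: str) -> tuple:
    """Turn a dotted version string such as '10.4.2' into (10, 4, 2)."""
    return tuple(int(part) for part in version.split("."))

def needs_patch(installed: str, latest_patched: str) -> bool:
    """Return True if the installed version predates the latest patched release.

    Tuples compare component by component, so '10.10.0' correctly ranks
    above '10.9.9', which a plain string comparison would get wrong.
    """
    return parse_version(installed) < parse_version(latest_patched)
```

The comparison itself is trivial; as the following paragraphs explain, the barriers lie elsewhere, in suppliers patching only recent versions and in the cost of upgrading.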

Unfortunately, there are serious disincentives that militate against consumers actually doing so. Suppliers commonly only patch very recent versions of their products. That forces many consumers to upgrade the version they are running in order to have access to the patch. That may cost a considerable amount of money, and require considerable effort and delay.

Upgrading to a new version may actually be undesirable, even highly so. In many cases, the only reason for applying the update is to address the security weakness in the original version of the product. But the new version very probably brings ‘bloat’, and with it the associated slow-down. In addition, many new versions include spyware installed by the supplier to serve its own ends. They may also include additional, non-negotiable licensing terms that are unacceptable to the consumer. These features arise with many suppliers, but are particularly common with Microsoft products, which afflict many consumers, variously at the operating system level, and in the key applications of office tools, web-browsers and email-clients.

It may even not be possible to upgrade a consumer device to a new version of systems or application software. Because later versions of software are typically bloated with a great many new, inefficient and mostly unwanted features, they may not run on the consumer's device without hardware enhancement. That involves additional cost and inconvenience. But some consumer devices, particularly smaller handhelds, may not be capable of being enhanced in the necessary manner, because their ‘form-factor’ is inherently greatly constrained, and there may be no port to plug an enhancement into, or no space inside the housing.

There are, in short, many barriers and disincentives that work against the widespread implementation of safeguards against ‘hacking’.

5. The Effectiveness of Available Safeguards

This section considers the reasonableness of the proposition contained in the EFT Code consultation paper of January 2007 to the effect that a duty of care should be imposed on consumers, formulated variously as ‘minimum’, ‘adequate’ or ‘reasonable’ standards. Technical safeguards are addressed, then legal safeguards, and finally the implications of the analysis for the EFT Code.

5.1 Technical Safeguards

The preceding sections have demonstrated that a vast array of threats exist, that they impinge on a vast array of vulnerabilities, and that the vulnerabilities are deep-seated in the device architecture, the systems software, the languages in which systems software and applications are developed, the applications, and the development practices of software suppliers.

Safeguards are available that address some of the threats and vulnerabilities. However, these safeguards:

• are numerous;

• have to be acquired from a range of separate sources;

• are not integrated with one another;

• are not integrated with the consumer device's systems software and applications;

• require considerable technical competence to understand;

• may cost money;

• cost time and effort;

• are difficult to install;

• are difficult to configure; and

• are difficult to maintain.

In order to take advantage of each particular safeguard, the user must do the following:

• acquire and maintain expertise, in order to:

o know about the threats and vulnerabilities, and about what the safeguard does and does not do about them;

o acquire, install, configure and maintain it;

o appreciate what deleterious effects the safeguard may have on their device (such as slower operation, inability to access some sites, and the creation of additional vulnerabilities);

• invest time, effort and money to acquire, install, configure and maintain it.

Worse still, after the consumer has gone to all of that trouble, the safeguards are of limited effectiveness, because:

• they are far from comprehensive;

• they require considerable and ongoing assiduousness on the part of the user;

• they create additional vulnerabilities;

• they are subject to countermeasures by attackers;

• they are unable to deal with new categories of attack, and new instances of existing categories of attack, until after they have been identified, and hence they have to be continually, and even continuously, updated.

Moreover, even a well-protected consumer device still has a wide array of vulnerabilities, many of them known to attackers. This applies in particular to the intrinsic vulnerabilities outlined in section 4.2, many of which were intentionally designed-in by suppliers, and to undefendable malware and hacking attacks outlined in section 4.4. There are also considerable exposures arising from the social engineering attacks referred to in section 4.1(3), particularly when skilfully combined with the data communications insecurities described in section 4.3.

5.2 Legal Safeguards

A legal framework communicates rights and obligations, encourages players to behave responsibly, and acts as a deterrent against irresponsible behaviour. It also creates the possibility of back-end controls, in the form of sanctions against organisations and individuals that misbehave, or at least opprobrium from ‘naming and shaming’. Hence, in theory at least, the law could be an effective safeguard for parties affected by malware and other sources of unauthorised banking transactions. In practice, however, there are a number of legal and evidentiary issues that make the law more of an obstacle than a safeguard. The following sections demonstrate how the law fails to provide an effective safeguard in relation to the security of consumer devices.

The following table provides a legal framework for unauthorised banking transactions arising from insecure consumer devices. The table treats device insecurity as being caused by a variety of methods including, for example, malware and fraud through social engineering. The columns represent the potential parties to possible legal avenues; the applicable laws and the challenges they face are set out for each party.

Larger themes arising from the various intersections of the table will be explored in the following sections and include: cybercrime, consumer protection, banking and generic challenges for high tech crime.

Table: Unauthorised Banking Transactions Legal Framework

Perpetrator(s) of Criminal Actions

Applicable laws: Crimes Legislation Amendment (Telecommunications Offences and Other Measures) Act (No 2) (Cth); Privacy Act (Cth); Customs Act (Cth); Cybercrime Act (Cth); Security Legislation Amendment (Terrorism) Act (Cth); Criminal Code Act (Cth) Pt 10.7 Computer Offences, Pt 10.6 Telecommunications Offences, Pt 10.8 Financial Information Offences, Pt 7.3 Fraudulent Conduct, Pt 5.3 Terrorism; various similar State legislative measures.

Challenges: burden of proof; jurisdiction (international component); digital forensics expertise necessary; high threshold for quality of evidence; defendants often not adults; complexity of technical issues difficult for judges and juries to comprehend; criminals often located in safe havens.

Financial Institution

Applicable laws: EFT Code, Clause 5 Unauthorised Transactions, Clause 6 System or Equipment Malfunction, Clause 8 Networking Arrangements; Code of Banking Practice; Privacy Act (Cth); Trade Practices Act (Cth) s 52 Misleading and deceptive conduct; tort of negligence.

Challenges: burden of proof; digital forensics expertise necessary; threshold of evidence quality.

Consumer Device (eg Software) Vendor

Applicable laws: Sale of Goods Act(s); Trade Practices Act (Cth) s 52 Misleading and deceptive conduct, s 53 Misrepresentations; tort of negligence.

Challenges: product liability legislation applicable to goods only – software potentially not a good; threshold of evidence quality.

Customer

Applicable laws: EFT Code, Clause 5 Unauthorised Transactions, Clause 6 System or Equipment Malfunction, Clause 8 Networking Arrangements; Code of Banking Practice; Banking Ombudsman; tort of negligence.

Challenges: burden of proof; litigation cost; digital forensics expertise necessary; threshold of evidence quality.

5.2.1 Cybercrime

This section transports itself into a fantasy world in which jurisdictional and evidentiary problems are sufficiently resolved to allow for criminal prosecution of those responsible for unauthorised banking transactions. As such, it represents a mere survey of the Australian criminal law applicable to unauthorised online banking transactions. By no means do the authors present the criminal law (or the civil law) system as an effective safeguard for computer security.

Most unauthorised banking transactions, and many security breaches, constitute offences under existing Australian laws. As a federation, Australia has a complicated framework, with both federal and state/territory criminal legislation in the area. In an attempt to harmonise existing computer crime offences, the Attorney-General issued a Model Criminal Code (Model Code). The Model Code led to the passing of a number of amendments to the Crimes Act 1914 (Cth). New South Wales, the Australian Capital Territory, Victoria, the Northern Territory and South Australia have all enacted similar provisions based on the Model Code. Tasmania, Queensland and Western Australia remain the only states yet to implement the Model Code.

The foremost change to the law is the enactment of the Cybercrime Act 2001 (Cth). This federal legislation amended the Criminal Code Act 1995 (Cth), the Crimes Act 1914 (Cth) and the Customs Act 1901 (Cth). The most relevant parts of the federal Criminal Code Act (Cth) are Part 10.6, which addresses offences committed through telecommunications services; Part 10.7, which broadly addresses computer offences; Part 7.3, which deals with fraudulent conduct; and Part 10.8, which deals expressly with financial information offences. Relevant provisions of the Criminal Code Act (Cth), as well as those found in many State criminal codes, address actions that can be categorised broadly as data misuse. Such actions include unauthorised access to, impairment of, and modification of data or electronic communications, and dishonest use of personal information. Whether an offence is committed generally depends on whether the person intended to commit an offence or to cause harm, but in some cases recklessness is sufficient.

Who is caught under data misuse legislation? The application of data misuse provisions (like that of many cybercrime provisions) verges on the farcical. On the one hand, data misuse provisions are written sufficiently broadly to capture the activities of everyone in the chain of information exchange leading to a security breach, and there are often no defences to data misuse offences. It is generally thought that broad wording allows sufficient flexibility to reduce legal loopholes. So, for example, someone who, without permission, hacks into a system to perform security threat testing may be captured by the provisions. On the other hand, although criminal data misuse provisions potentially apply to everyone and everything in the security chain, some have noted that in practice the provisions are merely symbolic. Prosecuting an anonymous 16-year-old computer student (perhaps contracted by the mafia) located in a former Soviet bloc country for writing two lines of computer code that exploit a buffer overflow is not merely difficult, it is virtually impossible.

The practices of many ‘legitimate corporations’ could also arguably fall within the parameters of criminal data misuse. For example, web-sites that install rootkits and backdoors, and that use invasive programming techniques without user authorisation, are quite probably in breach of the criminal law. In perhaps the most well-known incident to date, Sony BMG was forced to temporarily suspend the copy protection technology on many of its distributed music CDs, because the copy protection measure secretly installed spyware without user authorisation and consent. The spyware was hidden by a rootkit – software that conceals files and folders on a person's computer. The spyware could monitor and track the use of the album, and the rootkit left computer systems vulnerable to hackers, who could use it to conceal further security-compromising malware. Class action suits were launched in a multitude of American States, as well as in Canada and parts of Europe (see Sony BMG Settlements). Many corporations now release products with an end-user licence term authorising them to utilise a rootkit/backdoor for a variety of unspecified purposes, all of which may be subject to change without notification to the user. The provisions of such EULAs are often vague, difficult to understand and even misleading. Hence consumers have not provided the necessary 'informed consent'. In other instances, as was the case with Sony BMG, no information about the rootkit or spyware appeared in the EULA at all: no argument was even required as to whether the legal test of ‘informed consent’ was met, because the invasive technologies were installed in secret. There is simply no political will to prosecute ‘legitimate corporations’ for even clear instances of unauthorised access to data.

For a variety of reasons, very few cases are prosecuted. Cybercrimes are rarely reported. Many cybercrimes are committed by people in distant jurisdictions. International cooperation is required, but is difficult to get, and very slow. The gathering of evidence requires forensic specialists, who are in short supply. Technical complexities abound. The laws relating to digital evidence are still immature. Even if cases were mounted and won, the penalties are not severe enough to act as an effective deterrent, particularly given the scale of the financial benefits yielded to cyber-criminals.

5.2.2 Consumer Protection

Consumer protection mechanisms exist that may potentially apply to this situation, including product liability, misleading and deceptive conduct, misrepresentation, and the tort of negligence.

Product Liability:

Consumers are generally protected from faulty products and onerous contractual terms by the Trade Practices Act 1974 (Cth) (TPA) and the various Sale of Goods Acts (SGAs). Both the TPA and the SGAs frame key protections around the term ‘goods’: the law imposes certain terms on some contracts, and confers benefits on consumers in specific contexts, where the dealing involves ‘goods’. A contentious issue has consistently been whether software is a ‘good’.

The 2005 decision of the Administrative Appeals Tribunal (AAT) in Amlink Technologies Pty Ltd v Australian Trade Commission found software to be a ‘good’.[1] In reaching his conclusion, Senior Member McCabe distinguished contracts for the supply of know-how or intellectual property from contracts for the supply of goods. Though classifying software as a ‘good’ in the Amlink decision, the AAT fell short of delineating whether software that is not attached to a physical object (eg a CD-ROM) can be classified as a good. As such, software companies can continue to insert warranty clauses into their terms of use shielding them from lawsuits for software security flaws leading to financial loss.

Misleading and Deceptive Conduct:

Section 52 of the Trade Practices Act (TPA) addresses misleading and deceptive conduct, and is the pivotal consumer protection provision. The provision has been applied in a variety of contexts, from defamation, to confusion between the goods of rival traders, to more general protection of consumer interests (see, for example, Merman v Cockburn Cement and Swan Portland Cement;[2] as well as Concrete Construction v Nelson[3]).

The Australian Competition and Consumer Commission (ACCC) has jurisdiction in matters under the TPA. While only a court may determine whether a provision under the TPA has been contravened, the ACCC investigates complaints, negotiates settlements between parties and may bring suspected contraventions to court.

Whether ‘goods’ would cover a consumer device or, in particular, the software applications running on a device remains unclear. Many technical devices and software programs contain provisions in their contracts excluding warranties for performance. In particular, contracts often exclude any warranties or conditions covering the malfunctioning of the device or software when used to interact with other devices, services or software. The validity of such exclusion clauses remains largely untested in Australian courts. Should financial institutions decide to offer security software to their users, or to endorse certain security vendors, there is a possibility, depending on the structure of the endorsement, that s 52 could apply. This is particularly so where a security product is offered as an effective tool in combating phishing, spyware and other forms of unwanted software, but the product fails to perform, resulting in an unauthorised banking transaction. Courts can order parties to pay damages and grant injunctions, and breaches of some provisions, such as s 53 of the Act (dealing with misrepresentations), can also lead to criminal prosecution.

Misrepresentation:

Section 53 of the TPA addresses false or misleading representations made by a corporation in trade or commerce, in connection with the supply, or the promotion of the supply, of goods or services. The following types of false representations may be relevant to online banking transactions with consumer devices:

• falsely represent that goods are of a particular standard, quality, value, grade, composition, style or model or have had a particular history or particular previous use; (s.53(a))

• falsely represent that services are of a particular standard, quality, value or grade; (s.53(aa));

• represent that goods or services have sponsorship, approval, performance characteristics, accessories, uses or benefits they do not have; (s.53 (c)) and

• make a false or misleading representation concerning the existence, exclusion or effect of any condition, warranty, guarantee, right or remedy (s.53(g))

Issues of whether software is a good, a bank’s endorsement of security software, and the types of warranties typically found in device and software contracts are similar to those previously articulated for misleading and deceptive conduct in section 52 of the TPA.

Negligence:

Where an entity knew or ought reasonably to have known that the use of, sale of or reliance on a device, equipment or network involved security vulnerabilities, there is the possibility of a civil suit in the tort of negligence. The scope of negligence is sufficiently broad that anyone in the chain of information leading to an unauthorised online banking transaction could potentially be liable, conceivably including the consumer, the device manufacturer, the software developer and the financial institution. For an action in negligence to succeed, it must be shown that one party owed the other a duty of care, that the duty was breached, and that damage was sustained as a result. There are a number of enforceable contracts and agreements giving rise to a duty of care. For example, the bank will be responsible for maintaining a secure network and ensuring that none of its equipment malfunctions. Likewise, some warranties may apply to consumers of devices. Where device vendors sell insecure products, they may be exposed to liability. The same may also be said for software developers, provided that software is seen as a good. Where a user is made aware of a threat or vulnerability and fails to take reasonable measures to remedy the defect, he or she may also be found partially liable for any damage sustained by another party.

The principles of negligence are all good in theory. Consider the following scenario. A consumer writes his password on a yellow post-it note and keeps it in a locked drawer in his office. He uses his mobile phone to access the Internet, where he transfers money from a bank account in Hong Kong to Australia. The Hong Kong bank is undergoing network system upgrades. The Australian bank has been experiencing difficulty with spoof sites, which have caused a number of consumers to log onto fake sites where a keylogging program captures their usernames and passwords. The consumer is away on business for two months. He has recently moved within Australia, so that his mail is being forwarded to him from a previous address. He is using a work computer with an ineffective anti-virus program that has not been updated in over four months. Unbeknownst to him, the computer contains a number of pirated components, such that security patches from anti-virus companies are ineffective. In a series of transactions over a period of 30 days, A$5,000 fraudulently disappears from his Australian bank account, each time in amounts of $200. He is not made aware of this until he receives his bank statement some three months after the event, at which point he notifies the bank. The damage element of the negligence test is easily met, but the question of who owes whom a duty of care is vague, and is problematic from a remoteness perspective. More problematic still is the element of causation: who caused the unauthorised banking transactions is virtually unascertainable. Digital evidence and forensics issues exacerbate the problem (see below). At best, the law would have to make an educated guess as to the cause of the unauthorised transactions, and, more likely than not, all parties would be seen as having contributed (though perhaps to varying degrees) to the problem.

5.2.3 Banking Law

The EFT Code is the chief source of obligations between the customer and financial institution for instances of consumer device failure, and unauthorised bank transactions. The most relevant section is Clause 5: Liability for Unauthorised Transactions. Under the current scheme, clause 5.2 identifies instances where the user would clearly not be responsible for losses:

(a) losses that are caused by the fraudulent or negligent conduct of employees or agents of the account institution or companies involved in networking arrangements or of merchants or of their agents or employees;

(b) losses relating to any component of an access method that are forged, faulty, expired, or cancelled;

(c) losses that arise from transactions which required the use of any device or code forming part of the user's access method and that occurred before the user has received any such device or code (including a reissued device or code). In any dispute about receipt of a device or code it is to be presumed that the item was not received by the user, unless the account institution can prove otherwise. The account institution can establish that the user did receive the device or code by obtaining an acknowledgment of receipt from the user whenever a new device or code is issued. If the device or code was sent to the user by mail or email, the account institution is not to rely only on proof of delivery to the user’s correct address as proof that the device or code was received by that person. Nor will the account institution have any term in the Terms and Conditions which deems a device or code sent to the user at that person’s correct address (including an email address) to have been received by the user within a certain time after sending; or

(d) losses that are caused by the same transaction being incorrectly debited more than once to the same account.

Financial institutions must provide an effective means for customer notification of compromised accounts and unauthorised transactions. Consumers are not liable for unauthorised transactions occurring after notification. Where a bank can prove that a user contributed to the loss, the consumer is generally liable for losses. According to clause 5.5, the consumer may not, however, be liable for pre-notification losses exceeding the daily and periodic transaction limits, and losses beyond the account balance.

Clause 5.6 identifies situations in which the customer/user would contravene the Code and therefore be taken to have contributed to, if not caused, the compromise of the access method. Applicable situations include where a user voluntarily discloses the code; where the code is written on, or kept on or around, a device used to access the account (eg a yellow post-it under the keyboard, or a number written on an ID token) or is recorded on an article carried with the device; where the user selects a forbidden access code (eg a birthday or name); or where the user acts with extreme carelessness in protecting the security of access codes. In short, while users may record the access code, they have a general obligation to keep the code safe and protected.

Clauses 6 and 8 of the Code impose additional obligations on financial institutions, making them liable for equipment and system malfunction, along with failed or compromised network arrangements (eg retailers for EFTPOS).
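The allocation of loss described in clauses 5.3 to 5.6 can be summarised schematically. The following Python sketch is our own simplification, offered for illustration only: the function and parameter names are invented, and it omits the Code's many qualifications and exceptions.

```python
def consumer_liability(loss, balance, period_limit,
                       after_notification=False,
                       user_contributed=False):
    """Rough schematic of the EFT Code's allocation of loss for an
    unauthorised transaction (a simplification, not legal advice)."""
    if after_notification:
        # Cl. 5.3-5.4: no consumer liability for losses occurring
        # after the user notifies the institution of the compromise.
        return 0.0
    if user_contributed:
        # Cl. 5.5: a user who contributed to the loss bears it, but
        # not beyond the applicable transaction limits or the
        # account balance.
        return min(loss, period_limit, balance)
    # Otherwise the account institution bears the loss.
    return 0.0
```

For example, a contributing user facing a $5,000 loss with a $1,000 daily limit would, on this sketch, bear only $1,000; a user who had already notified the institution would bear nothing.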

5.2.4 Generic Challenges for High Tech Crime

As touched upon briefly in the previous sections, any potentially applicable legal provision is likely to be thwarted by generic challenges that arise in what are referred to as high tech crimes, and in technology-related cases more generally.

Burden of Proof:

In any criminal matter, the Crown bears the full burden of proving beyond reasonable doubt all elements of a crime before the accused may be found guilty. This elevates the threshold required to determine the actual cause of a security breach, whether technical or otherwise. As will be shown in this section, digital forensics is in most situations unable to diagnose accurately the exact cause of a security flaw or an unauthorised transaction. Other factors, such as the location of the criminal (often in a foreign jurisdiction) and frequently the age of the defendant, complicate matters further.

In civil suits, it matters little which consumer protection mechanism is relied upon, whether the tort of negligence or provisions of the Trade Practices Act: the legal system presents sufficient deterrents to bringing an action under any of them. The plaintiff (a consumer, or perhaps an association or watchdog agency acting on behalf of a class of consumers) bears the burden of proof on the balance of probabilities. Proving negligent conduct is, in practice, very difficult. A successful litigant would need to present sophisticated technical evidence requiring expert testimony, and opportunities abound for respondents' experts to counter litigants' experts. In any event, digital forensics remains an immature science when it comes to proving, on the balance of probabilities, the flaw responsible for an unauthorised bank transaction; the exact cause of a security breach, and who is responsible for it, remain murky. Judges routinely permit respondents great latitude to delay a case for long periods and to drive up the costs of litigation. Hence the costs commonly greatly outweigh the financial harm for which reparation can be sought.

The EFT Code discussion paper does not give a clear indication of who is to bear the burden of proof. Transactions are generally posted to consumers' accounts when the financial institution receives notice of them. So a contested transaction stands until and unless the consumer takes action that causes the financial institution to reverse it. Hence the burden of proof, in reality, lies with the consumer. The changes being considered for the EFT Code would see allocation of liability shifted to the consumer for malware damage where the consumer did not implement ‘minimum’ (or perhaps ‘adequate’, or ‘reasonable’) safeguards to secure the computer device.[4] It remains unclear whether the consumer would have to prove that he/she took minimum safeguards, or whether the bank would have the onus to prove that the consumer did not take minimum precautions. The latter would involve the bank conducting a physical examination of the consumer’s device, along with each level of equipment and software involved in a transaction including its own network. Such latitude is currently not expressly required under Australian law. New Zealand, on the other hand, has addressed this problem in section 32 of the new Code which states:

We reserve the right to request access to your computer or device in order to verify that you have taken all reasonable steps to protect your computer or device and safeguard your secure information in accordance with this Code. If you refuse our request for access then we may refuse your claim.

Aptly put by banking expert Professor Alan Tyree:

There is, unfortunately, no clear indication of who bears the burden of proof in determining if [the unauthorised transactions] sections apply. Basic principles indicate that the institution has the burden of proving that it is entitled to debit the account, but real life has shown that institutions will debit an account in almost all conditions unless the customer can prove that they have no right to do so.[5]

Regardless of who bears the burden of proof, forensics and evidence issues present still greater challenges to a lawsuit.

Digital Evidence and Forensics:

High tech crimes are often the most difficult crimes to prosecute. The complexity that arises from having to explain the technical dimensions alone is enough to sink a case. Imagine presenting a case in which the crimes committed are explained as, ‘a root access by a buffer overflow in which memory was overwritten by other instructions which allowed the attacker to copy and execute code at will and then delete the code, eliminating all traces of entry (after disabling the audit logging, of course).’[6]

The High Tech Crime division of the Australian Institute of Criminology has published a number of papers addressing digital evidence issues. The issues include: a general lack of trained computer forensics experts; the frequent need to outsource forensics work where ‘non-police’ are involved; the requirement that evidence be collected in compliance with the law (one cannot obtain a warrant to wire-tap someone in Latvia, or compel an Internet service provider (ISP) in Mongolia to provide data logs); the fact that not all ISPs are required to maintain detailed data logs; delays in the transmission of data logs to the police; the co-mingling of data, where retrieved data may relate to a number of persons who share a computer or email account; the volatility of digital evidence (the ease with which evidence may be altered or damaged, whether accidentally or intentionally), and the corresponding ease with which it can be expunged; and the ‘Trojan horse’ defence, where a party claims that a Trojan horse or other malware, rather than the party, is responsible for an action.

Digital forensics often involves the examination of large amounts of data, perhaps best illustrated by example:

The amount of information gathered during the investigation in Operation Firewall by the United States Secret Service is estimated to be approximately two terabytes – the equivalent of an average university’s academic library.[7]

While digital forensics for an unauthorised banking transaction will not involve two terabytes of information, it will still involve a large amount of data. To reduce processing time, forensic examiners often use analysis techniques such as hash filtering, which reduce processing time but can also affect the accuracy of the results. Add to this the inherent volatility of digital evidence, and a recipe for the loss of evidence is complete.
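The hash-filtering technique referred to above can be sketched as follows. This Python illustration is our own and is not drawn from the sources cited: files whose cryptographic digests match a reference set of known-good hashes (such as a library of stock operating system files) are excluded from examination, shrinking the volume of data an examiner must review. The file names and contents below are invented for the example.

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    return hashlib.sha256(data).hexdigest()

def hash_filter(files: dict, known_good: set) -> dict:
    """Exclude files whose digests appear in a reference set of
    known-good hashes, leaving only material worth examining."""
    return {name: data for name, data in files.items()
            if sha256_digest(data) not in known_good}

# Hypothetical corpus: two stock system files and one unknown binary.
corpus = {
    "notepad.exe": b"stock system binary",
    "kernel32.dll": b"stock system library",
    "dropper.bin": b"unrecognised executable",
}
known_good = {sha256_digest(b"stock system binary"),
              sha256_digest(b"stock system library")}

remaining = hash_filter(corpus, known_good)
```

In this sketch only the unrecognised binary survives the filter. The trade-off noted in the text is visible here: anything matching the reference set is discarded without further inspection, so the completeness and integrity of the reference set directly determine the accuracy of the result.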

The quality of digital evidence is likewise a problem for lawsuits outside the criminal arena. On most Internet-connected devices, a great number of software applications may be installed. This is important from legal and evidentiary perspectives, because a technical vulnerability may derive from a physical feature of the device, from the point of connection to the device, from the use of particular software, or, all too commonly, from a combination of several such factors. The source or cause of a security deficiency is difficult to diagnose, which in turn makes liability difficult to allocate. Under the proposed revision of the EFT Code, consumers will be liable for financial loss where they did not take minimal (or adequate) steps to secure their devices. Such language eliminates the need to determine the actual cause of an unauthorised transaction. As previously noted, the scope of ‘minimal’ or ‘adequate’ has yet to be discussed. Under the New Zealand Code, responsibility is clear, albeit grossly unfair:

(iv) Your computer or device is not part of our system therefore we cannot control, and are not responsible for, its security. However, we will inform you, primarily through our website, how to best safeguard your online information and the steps you should take to protect yourself and your own computer from fraud, scams or unauthorised banking transactions.

In addition to non-technical advice (such as not leaving your computer/device unattended when you are logged on to Internet Banking or not using shared computers like those in Internet cafes to access Internet Banking), we will also have on our website available information and advice on the benefits of installing and maintaining protection, in respect of, for example:

• anti-virus software;

• firewalls;

• anti-spyware; and

• operating system security updates

The New Zealand model says, ‘hey, here are some tips but you’re on your own when it comes to your computer/device’s security’.[8] The Australian model appears to say, ‘hey, here are some tips, take some unknown minimal or perhaps adequate measures, and you’re not on your own.’ As this paper has demonstrated, even when a consumer follows safe information practices, installs and regularly updates his/her anti-virus, firewall, anti-spyware and operating system, the consumer is still vulnerable to a host of attacks. Consider further that many eCommerce and even eBanking services only work because they exploit vulnerabilities on consumer devices. When seen in this light, it does not appear fair to allocate liability to the consumer.

Jurisdiction:

High tech crimes often involve parties located overseas. They may involve many people in different jurisdictions, whether different states within a country or different countries altogether. Each jurisdiction has its own law dealing with an issue, as well as its own unique set of courtroom evidence procedures; uniformity is a real problem. A successful prosecution often requires the assistance and cooperation of authorities from an outside jurisdiction, and for a variety of reasons some jurisdictions may be unwilling to cooperate. Such cooperation generally must proceed through the cogs of bureaucracy, in cases where time, and access to good (unaltered) digital evidence, are of the essence. This often means applying for warrants in multiple jurisdictions, which may translate into a loss of valuable time and perhaps of obtainable evidence. Of course the biggest challenge remains identifying and determining the physical location of the computer, and then of the actual individual(s) who used the computer or network to commit a crime.

5.3 Implications for the EFT Code

The changes being considered for the EFT Code would see liability for malware damage shifted to the consumer where the consumer did not implement ‘minimum’ (or perhaps ‘adequate’, or ‘reasonable’) safeguards to secure the computer device. This sub-section considers that proposal in light of the preceding analysis.

The discussion paper does not give a clear indication of who is to bear the burden of proof. Transactions are generally posted to consumers' accounts when the financial institution receives notice of them. So a contested transaction stands until and unless the consumer takes action that causes the financial institution to reverse it. Hence the burden of proof is readily interpreted as lying with the consumer.

The consultation paper sets out two policy principles: the ‘least cost avoider’ principle (para. 7.10), and the simplicity principle (para. 7.11). This paper has demonstrated the infeasibility of devising a secure payments scheme that depends on consumer device security. Hence the ‘least cost’ approach dictates that the service-provider must deliver security from the server end. The simplicity principle states that ‘broad standards such as “the user takes all reasonable steps to keep the access method safe” are less appropriate than specific standards’ (para. 7.11). Yet the draft does not discuss what types of specific standards are to be imposed; and, in any case, the analysis conducted in this paper suggests that no simple statement of specific standards is feasible. Hence the scheme described in the discussion document could not possibly be equitable.

Instead of placing the onus of proof on the consumer, the logical locus of responsibility is the service-provider. If a corporation wished to shift to the consumer the responsibility for loss arising from a particular transaction, then the corporation would need to be able to (in the terms used in the EFT Code of Conduct at clause 5.5) ‘prove on the balance of probability’ that the loss arose in large part because of one or more specific deficiencies in the consumer's device.

This would require the corporation to have access to the device, in the state that it was in some time previously. That probably implies that the device itself has to be taken from the consumer. In any case, the science and (largely) art of security safeguards is in many circumstances incapable of determining what vulnerability was exploited by what attack, let alone of providing information of evidentiary quality in support of the conclusion reached (explored in greater detail in section 6).

A particular irony in the situation is that a great many web-sites that support transactions depend on advanced and intrusive programming techniques such as cookies, JavaScript, Java, ActiveX controls and AJAX. Yet it is precisely these techniques that safeguards need to block in order to achieve consumer device security. Hence those consumers who actually adopt appropriate safeguards would be to a considerable extent precluded from conducting transactions and making payments on the Internet.

In most circumstances therefore, it is logically untenable for corporations to argue for a shift in liability. In delivering services to consumer devices over the Internet, corporations are depending on insecure infrastructure, and they must carry the responsibility for doing so.

6. Other Approaches

Given the enormous range of vulnerabilities, the ineffectiveness of safeguards, and the serious difficulties involved in imposing liability on consumers, it might appear that consumers will get away scot-free, irrespective of the degree of recklessness with which they use the Internet to conduct transactions. Clearly it is not in the interests of society or the economy for that to be the case. It would be preferable to provide an incentive for them to take due care.

Constructing such a scheme is challenging, however. The present arrangements already embody a requirement that consumers be careful. Consumers remain liable for the consequences of compromised credit-card details until they report the problem (EFT Code at 5.3), and where a security-code such as a PIN is not protected, they bear the first $150 of the loss (5.5(c)). If additional contingent liabilities are to be imposed on consumers, then similar, very carefully judged approaches are essential.

A related concern is that the Code at para. 5.4 suggests that the consumer could be liable where they have ‘contributed to the loss’. The effect of this is currently limited by paras. 5.5 and 5.6. Any amendments to the Code would need to be carefully constructed, however, in order to ensure that consumers did not become disproportionately responsible for losses.

There would be considerable benefits in a multi-partite scheme to:

• educate consumers;

• provide on-demand advice to consumers;

• provide pre-packaged security-settings for download and installation;

• make appropriate software readily available; and

• provide straightforward advice on how to install and configure such software.

Such a scheme would require substantial funding. It would need to involve financial institutions, retailers and software providers (in each case perhaps through appropriate industry associations), and governments, in consultation with consumer representative and advocacy organisations. On the other hand, software suppliers might see the scheme as a threat to their business, and financial institutions are likely to seek to avoid the costs involved. Hence each of the various corporations may well obstruct any such scheme.

It is therefore inadequate to limit the discussion about appropriate incentives and disincentives to consumers. The discussion needs to extend to:

• financial institutions;

• merchants;

• suppliers of consumer devices; and

• suppliers of operating systems and applications that run on them.

The analysis conducted in this paper has demonstrated that consumer devices are inherently highly insecure. There is an urgent need for producers of these devices and the software that runs on them to abandon their cavalier attitudes of the past and take responsibility for producing devices that have far fewer and far less severe vulnerabilities.

Moral suasion and the stimulation of ‘self-regulatory codes’ are an inadequate response, because the changes required are attitudinal and substantial. Further, an increasing number and variety of organisations, currently outside the scope of the EFT Code, need to be subjected to it. Hence formal legislation is necessary, to establish a regulatory framework, followed by co-regulatory work to enable the promulgation of enforceable codes that are practicable for industry, and that are sufficiently quickly adaptable as the patterns of technology and eBusiness change.

Because of the international nature of the information technology industry, at least bilateral discussions with US regulators are needed, and more likely multilateral discussions, through such venues as the Organisation for Economic Co-operation and Development (OECD) and the Asia Pacific Economic Cooperation (APEC).

7. Conclusions

It is generally perceived that eCommerce and eGovernment offer great scope for efficiency and service benefits. Consumer trust is fundamental to the widespread adoption of transaction-based services, particularly those services that involve payments. Considerable improvements in the security of the infrastructure used for Internet transactions are essential.

This paper has summarised evidence about the feasibility of the authorised users of consumer devices being made responsible for their behaviour, in particular in the context of financial transactions.

There are a great many ways in which a consumer device may conduct transactions, or conduct them in ways, that the device's authorised user did not intend and may not even be aware of.

Some safeguards exist that address some of the vulnerabilities and mitigate the effects of some of the threats. They are difficult and expensive to implement, and are in any case incomplete, far from perfect, and rarely up-to-date. Moreover, some of the vulnerabilities are inherent in the hardware, network connections, systems software and application software. Quite simply, consumer devices are not currently safe, and cannot be rendered safe.

It is not practicable for consumers to achieve control of their devices. As a result, it is impracticable to make consumers responsible for the negative impacts of actions by their machines that they did not authorise. Further, it is untenable to remove the established consumer protections and make consumers responsible for losses arising from the use of consumer devices to make payments. Instead, it is essential that consumers be indemnified against those negative impacts.

Further, actions are needed by business and government to address the parlous state of consumer device security, and the almost complete absence of formal legal safeguards.


[*] Roger Clarke is Principal of Xamax Consultancy Pty Ltd, Canberra. He is also a Visiting Professor in the Cyberspace Law & Policy Centre at the University of N.S.W., a Visiting Professor in the E-Commerce Programme at the University of Hong Kong, and a Visiting Professor in the Department of Computer Science at the Australian National University.

Alana Maurushat is Deputy Director of the Cyberspace Law & Policy Centre, and a part-time lecturer and PhD candidate, all in the Faculty of Law at the University of N.S.W. She is formerly a lecturer and deputy director of the LLM in IT and IP in the Faculty of Law at the University of Hong Kong, where she continues to teach as a visiting lecturer.

[1] Amlink Technologies and Australian Trade Commission [2005] AATA 359.

[2] (1988) 84 ALR 521.

[3] [1990] HCA 17; (1990) 92 ALR 193.

[4] It should be noted that some banks, in particular NAB, have begun to offer customers discounts on security software, allowing those customers to purchase products that the bank considers secure against online threats. Whether this would constitute a minimum safeguard remains unclear. Of course, as demonstrated in this paper, no security software is immune from all online threats and vulnerabilities, but this would be a start at eliminating certain threats, such as less sophisticated forms of phishing attack and some forms of spyware.

[5] Tyree, A Banking Law in Australia (5th ed, 2005) 379.

[6] Pfleeger, C & Pfleeger, S, Security in Computing (4th ed, 2006) 682.

[7] Choo, Smith & McCusker, Future Directions in Technology Enabled Crime, 2007-2009, Australian Institute of Criminology Research and Public Policy Series, No. 78 (2007), 88.

[8] It should be noted that some banks operating in New Zealand are not following the Code in this respect. BNZ and Westpac, for example, do not require users to have up-to-date software. BNZ is moving towards compulsory two-factor authentication. There is no evidence, however, to suggest that banks operating in New Zealand are holding customers to the legal terms; they may, instead, be reimbursing customers affected by fraud.

