[2] For inter-personal communications on the Internet, trust is achievable. This article commences by briefly reviewing first the Internet and its use, and then the nature of trust online, concluding that privacy is a factor necessary to trust in cyberspace. A brisk analysis of privacy risks in cyberspace (and various methods of dealing with them) leads into an argument that, while various methods can be devised to protect privacy and encourage trust between individuals in their online dealings, the minimalist ‘fair information practices’ movement of the last thirty years is utterly inadequate as a basis for providing ‘net-consumers’ with the privacy they need. The current situation (which has resulted from that movement) therefore prevents individuals from trusting organisations and seriously constrains their preparedness to deal with them electronically. This article concludes that unless organisations establish their trustworthiness with consumers and citizens, the sluggishness in the growth of electronic commerce will continue for years to come.
[4] These and other services together create an ‘experience space’, in which people have a ‘shared hallucination’. While there is nothing physically ‘there’, if the parties suspend their disbelief, and perceive themselves as having a sufficiently common understanding based on the information they are exchanging, then it seems as if there is, in fact, ‘something there’. A sci-fi novelist coined the term ‘cyberspace’ as a means of referring not to the underlying inter-networking arrangements of the Internet, nor to the services built upon that infrastructure, but to the virtual experience users share.
[6] Trust differs depending on the relationship between the parties. Economic relationships may be direct, as in principal-agent and contractual relationships. In many cases, however, a party may rely on another party despite having no formal relationship with them, or even much knowledge about them. (Examples from cyberspace include unthinking acceptance of the veracity of the contents of an email message or a website.)
[7] Trust may be relatively unimportant where the risks that the parties are exposed to are limited and the elapsed time during which the exposure exists is quite short, or where the risks are well understood and insurance against them is factored into the costs. Where such factors do not exist, trust tends to be crucial for transactions to take place and relationships to develop.[2]
[9] A key reason for trust being a substantially different challenge in cyberspace – in comparison with the physical world – is that the parties have little knowledge about one another, and cannot depend on such confidence-engendering measures as physical proximity, handshakes, body language, a common legal jurisdiction, or even necessarily any definable jurisdiction.[3] A range of measures is needed to inculcate sufficient confidence in Internet users that economic transactions can proceed and relationships can be built online. These measures include the availability of information that can be authenticated, recommendations from trusted parties (as distinct from ersatz, engineered proxies for reputation, such as brand names and ‘seals of approval’), message and data security, limitation of risk exposure, and other general safeguards against risk. This article focuses on a particularly important factor relevant to encouraging trust online: ensuring users’ privacy.
[11] However, a variety of risk management approaches are available to users. A principal proactive strategy is avoidance, which can involve declining to use particularly threatening Internet services (such as Microsoft products generally), avoiding central storage of personal data, not divulging sensitive personal data such as contact points and credit card details, and storing sensitive data and performing sensitive procedures on equipment that is not connected to the Internet. Other proactive strategies include deterrence (for example, providing notice to marketing organisations that are suspected of gathering personal data that consent is explicitly denied), and prevention (for example, by implementing counter-measures such as ‘cookie’ managers and personal ‘firewalls’).[5] Additional approaches are reactive in nature: detection strategies include virus detection software and monitoring of the traffic leaving one’s own machine; recovery strategies include virus removal routines; and insurance strategies include backup of personal data complemented by clear plans as to how to recover from an invasion by harmful software. In some circumstances it may be rational to rely on the non-reactive strategy of risk tolerance: ‘I don’t have the time to consider it, or the money to address it, and if the worst happens, I’ll worry about it then’.
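To make the ‘prevention’ strategy concrete, the following is a minimal sketch, in Python, of the kind of policy a ‘cookie’ manager enforces: cookies are stored and returned only for domains the user has explicitly chosen to trust. The allowlist, class name and example domain are illustrative assumptions, not a reference to any particular product mentioned above.

```python
# Illustrative sketch of a minimal 'cookie manager' policy: store and send
# cookies only for domains the user has explicitly trusted. Domain names and
# class names are assumptions made for the example.

import urllib.request
from http.cookiejar import CookieJar, CookiePolicy


class AllowListPolicy(CookiePolicy):
    """Accept and return cookies only for explicitly trusted domains."""

    # Attributes consulted by CookieJar when applying a policy.
    netscape = True
    rfc2965 = False
    hide_cookie2 = False

    def __init__(self, trusted_domains):
        self.trusted_domains = set(trusted_domains)

    def set_ok(self, cookie, request):
        # Refuse to store cookies set by any domain not on the allowlist.
        return cookie.domain.lstrip(".") in self.trusted_domains

    def return_ok(self, cookie, request):
        # Only send back cookies belonging to trusted domains.
        return cookie.domain.lstrip(".") in self.trusted_domains

    def domain_return_ok(self, domain, request):
        return domain.lstrip(".") in self.trusted_domains

    def path_return_ok(self, path, request):
        return True


# Usage: attach the policy to a cookie jar used by an ordinary HTTP client.
jar = CookieJar(policy=AllowListPolicy(trusted_domains={"example.org"}))
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))
```

Real browser add-ons and personal ‘firewalls’ apply far richer rules than this, but the principle is the same: the user, rather than the website, decides what is retained and transmitted.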
[12] Given the privacy risks confronting people in cyberspace, however, caution is generally advisable. Thus a tendency arises among experienced players to adopt a proactive avoidance strategy that includes denying other parties knowledge of one’s identity, denying other parties information about oneself generally, and perhaps even falsifying information about oneself.[6] The following sections consider the efficacy of some approaches by individuals and by organisations (including governments and corporations) to privacy protection in fostering trust in cyberspace.
[14] It is also likely that information about the entity behind the nym will be disclosed to other participants, at the very least through the nym’s behaviour. Indeed, many social relationships in electronic fora involve what is sometimes referred to as ‘performance-based reputation’. Nothing is known about the person other than their ‘track-record’ or history in that particular context, yet other members of the forum may be quite trusting of the person behind the nym, unless and until they destroy their own credibility through behaviour or expression inconsistent with the persona they have developed.
[15] In many cases, however, denial of identity protects one party while preventing the other party from developing trust through shared information. An approach that is riskier for the first party, but more conducive to the development of trust, is to use a nym that is traceable but not readily so. Other parties can have some confidence that serious misbehaviour by a person (for example, criminal acts like harassment and fraudulent misbehaviour, and civil wrongs like failure to perform contractual obligations and insolvency) can be addressed by breaking through the protections surrounding the nym and identifying the individual.
[16] The challenge is to find suitable means whereby legal, organisational and technical protections can be breached when conditions demand it, but not breached casually, even by a powerful organisation (such as a government or a large corporation), simply when the organisation believes its interests have been harmed.
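By way of illustration only, and not as a proposal made in this article, the sketch below shows one technical shape such a ‘breachable but not casually breachable’ arrangement can take: a nym derived from a real identity with a keyed hash, where the key is held by an escrow party under legal and organisational safeguards. The escrow key, function names and identity string are all assumptions introduced for the example.

```python
# Sketch of a 'traceable but not readily traceable' nym: the link between a
# nym and a real identity can be confirmed only by a party holding a secret
# escrow key. All names here are illustrative assumptions.

import hmac
import hashlib
import secrets


def issue_nym(escrow_key: bytes, real_identity: str) -> str:
    """Derive a stable pseudonym from a real identity using a secret key.

    Parties without escrow_key cannot link the nym back to the identity;
    the escrow party (for example, acting under a court order) can re-derive
    and confirm the link.
    """
    digest = hmac.new(escrow_key, real_identity.encode("utf-8"),
                      hashlib.sha256).hexdigest()
    return "nym-" + digest[:16]


def confirm_link(escrow_key: bytes, real_identity: str, nym: str) -> bool:
    """Escrow-side check that a given nym belongs to a given identity."""
    return hmac.compare_digest(issue_nym(escrow_key, real_identity), nym)


escrow_key = secrets.token_bytes(32)        # held only by the escrow party
nym = issue_nym(escrow_key, "Jane Citizen")
assert confirm_link(escrow_key, "Jane Citizen", nym)
```

Other parties see only the nym; linking it back to the individual requires the cooperation of the key-holder, which is precisely the point at which the legal and organisational controls discussed above can be applied.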
[18] The ‘fair information practices’ movement originated in American business and government circles in the late 1960s, but flowered in Europe during the 1970s. Substantial bodies of so-called ‘data protection’ laws have developed as a result and are still being refined. The model has been adopted and adapted in many non-European countries, resisted by the United States Federal Government, and bastardised by the Australian Government.
[19] The notion of ‘fair information practices’ has proven to be utterly inadequate, with inadequate scope, manifold exemptions and exceptions, and missing control mechanisms.[8] It has become so ingrained, however, that the focus of public policy is very difficult to shift away from the protection of mere data, back to the protection of people’s privacy.
[20] In the meantime, organisations continue to enthusiastically develop and implement inherently privacy-invasive technologies, for example, by seeking to impose intrusive online identification and identity-authentication mechanisms,[9] and person location and tracking technologies,[10] including controversial ‘digital signature’ schemes.[11]
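For readers unfamiliar with the mechanism, the sketch below (which assumes the third-party Python cryptography package; the article does not refer to or endorse any particular implementation) illustrates why ‘digital signature’ schemes raise the concern described: everything a person signs verifies against the same persistent public key, which can therefore serve as a durable identifier linking their transactions.

```python
# Sketch only, using the third-party 'cryptography' package (an assumption
# made for illustration). It shows why signature schemes identify by design:
# every message a person signs verifies against the same persistent public
# key, which acts as a durable identifier across transactions.

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()   # held by the individual
public_key = signing_key.public_key()        # circulated as their 'identity'

message_1 = b"order #1: 3 widgets"
message_2 = b"order #2: 1 gadget, different merchant"

sig_1 = signing_key.sign(message_1)
sig_2 = signing_key.sign(message_2)

# Both transactions verify against the one public key, so any party that
# sees both can link them to the same person.
public_key.verify(sig_1, message_1)   # raises InvalidSignature on failure
public_key.verify(sig_2, message_2)
```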
[22] However, many privacy-abusive activities are subject only to organisational self-restraint and industry association codes. So-called ‘self-regulation’ is regarded by the public as completely lacking in credibility. Measures like meta-brands (for example, the ‘seals of approval’ provided by TRUSTe and WebTrust) and privacy statements are repeatedly breached, and seen to be breached, without any action being taken; the undertakings made are therefore nominal, unenforced, and in most cases unenforceable. Self-regulation is seen by the public for what it is: supervision of the sheep by the wolves, for the benefit of the wolves, and a means for business to establish a pretence of regulation in order to hold off actual regulation.
[23] European countries at least have a regulatory framework in place, even though its scope is quite inadequate for the ‘information age’ that was already very much in evidence late last century. Australia, however, is very different. Federal Privacy Commissioners seem to regard their role as restricted to that of a mere administrator of legislation. They talk pleasantly with the organisations that the public expects them to regulate, and they issue guidelines in relation to Internet usage that actively encourage organisations to invade their employees’ privacy in ways that would be illegal if applied to person-to-person conversations and the telephone.
[24] Far from enhancing trust between individuals, and between individuals and organisations, recent Australian legislation (in the form of the Privacy Amendment (Private Sector) Act 2000 (Cth)) has subverted the principles of privacy protection outlined in the 1980 Organisation for Economic Co-operation and Development Guidelines Governing the Protection of Privacy and Transborder Flows of Personal Data in order to legitimise a wide variety of privacy-intrusive practices by private sector corporations. This law is an actively ‘anti-privacy’ statute. The current Privacy Commissioner’s laudable attempts to interpret the statute broadly enough to overcome some of its weaknesses are unlikely to succeed. It is a serious setback to hopes for privacy generally and for trust in Internet commerce in particular, and it demonstrates the inadequacy of the ‘fair information practices’ movement.[12] At a time when substantial new initiatives are needed, Australia is 30 years behind and going backwards.
[25] Contrary to popular mythology, the United States (‘US’) is the country with the highest level of privacy regulation in the world. However, the relevant legislation comprises large numbers of highly specific statutes, created as ‘knee-jerk’ reactions to particular issues and public concerns. Comprehensive legislation is still being resisted, and, when it comes, will be subject to massive subversion by the corporate interests that fund American politicians. Yet the desperate need for measures that encourage trust in economic uses of cyberspace will eventually force the hand of the US Congress and the President.[13]
[27] Organisations, and some individuals, are using the potential that Internet technologies provide to abuse these aspects of privacy by subjecting users to privacy-invasive measures such as surveillance techniques. Technical devices such as ‘click-trails’, ‘cookies’ and single-pixel images (referred to in the popular literature as ‘web-bugs’) are used to complement simpler ideas like cajoling net-consumers into providing large quantities of personal data in return for very little recompense, and the pooling of behaviour-related data among companies.
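As an illustration of the ‘web-bug’ technique just described (a sketch under assumed heuristics, not a definitive detection tool), the following Python fragment flags single-pixel images fetched from a host other than the page being viewed, which is the pattern typically used to report a user’s visit to a third-party tracker. The host names and size threshold are assumptions made for the example.

```python
# Crude detector for the 'web-bug' pattern: 1x1 images served from a
# third-party host. Heuristics and host names are illustrative assumptions.

from html.parser import HTMLParser
from urllib.parse import urlparse


class WebBugDetector(HTMLParser):
    def __init__(self, page_host: str):
        super().__init__()
        self.page_host = page_host
        self.suspects = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        a = dict(attrs)
        tiny = a.get("width") == "1" and a.get("height") == "1"
        src_host = urlparse(a.get("src", "")).hostname or self.page_host
        if tiny and src_host != self.page_host:
            # A single-pixel image fetched from another host: likely a tracker.
            self.suspects.append(a.get("src"))


html = '<img src="http://tracker.example.net/bug.gif" width="1" height="1">'
detector = WebBugDetector(page_host="www.example.org")
detector.feed(html)
print(detector.suspects)   # ['http://tracker.example.net/bug.gif']
```

The image itself conveys nothing to the user; its only function is the request it generates, which tells the third party who viewed the page, when, and from where.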