
Journal of Law, Information and Science


Adams, Andrew A --- "Facebook Code: SNS Platform Affordances and Privacy" [2014] JlLawInfoSci 7; (2014) 23(1) Journal of Law, Information and Science 158


Facebook Code: Social Network Sites Platform Affordances and Privacy

ANDREW A ADAMS[*]

Abstract

Social Network Sites (SNS) have become a very common part of life for a majority of regular Internet users. The implications of this usage for users' privacy are a topic of significant social and legal concern, with respect to multiple parties: their connections, unconnected other users of the site, other ordinary Internet users, platform operators, other commercial organisations and governments. Some have claimed that because users submit a significant amount of their information directly and voluntarily to these sites, such usage should be regarded as wholly voluntary and subject to no significant privacy controls. In this paper the relevant sociological and psychological literature (general and specific) on the actual level of control users have over their actions and their data is presented, with the conclusion that greater regulatory control is warranted.

Introduction

The difficulties of maintaining some form of privacy in the new age of social media have received significant attention and study in both the mainstream media[1] and the academic literature[2] of late. Regulators have struggled with the problem of users revealing highly sensitive information freely online, sometimes attempting to restrict who can see it and who cannot, but rarely having real control. When users themselves are, apparently happily, giving their data to Facebook and Google in return for their free services, do regulators have any role to play?

In this paper, the influence of some standard psychological theories such as post-decision bias and peer pressure is combined with theories of the social impact of technological systems’ architecture (and in particular their affordances[3] with respect to the flow of information control) to suggest that the role of regulators is becoming more and more important in this area. Users are not well-placed to protect themselves due to various facets of the way social network services work, the psychological and practical limitations on users’ efforts, the technical difficulties of providing good tools for controlling information flow, and user and platform incentives for over-sharing in the short-term in spite of long-term dangers.

1 Theories from Information Law and Ethics

Wiener,[4] hailed by many as the founder of information ethics,[5] warned of the dangers inherent in his new science of control, Cybernetics. The governance of technical systems and its theoretical counterpart of information theory, which were his primary concerns, raised issues about the impact that the growing control over information embedded in the rapidly developing computers of the time could have on individual humans and the societies they formed. Meyrowitz[6] chronicled the impact of television in breaking down the barriers to the flow of information between places, and the effect that had on people's sense of their place in the world, and particularly their sense of place in society. Although Meyrowitz does not refer to Wiener and his theory of control, his account of the sense of place and television's impact on it reflects Wiener's concerns, and Wiener's foreshadowing of McLuhan's statement that 'The Medium is the Message',[7] although for Wiener the organisation (the entity) was the message. The key message of these thinkers is that the flow of information is of immense significance in creating the social and physical world in which humans live.

Lessig[8] drew attention to the impact that the Internet was beginning to have on commerce, people and society. In particular, he drew the attention of lawyers and lawmakers to the fact that the methods of transmitting information are decided by engineers who do not exist in a social, moral or psychological vacuum, and that the decisions of these engineers can have enormous consequences (both deliberate and unforeseen) on society, consequences which can only be affected by law when their creation, deployment and utilisation are understood and where the other forces of economics and psychology/sociology are also considered. Galloway[9] expanded upon these concepts, focussing first on the role of the protocol (the set of rules defining valid and invalid message exchange formats) and then switching his focus to the idea of the interface,[10] a broader conception which includes not just the protocol, but the ways in which the protocol is written, used, revised and supplanted.

The technical systems we use thus have embedded in them inherent biases about how they will be used. Sometimes these biases are explicit and intended parts of the system, deliberate choices made by the designers. Just as often, however, the implications of a particular decision by system designers are unforeseen or even unforeseeable. The front-end capabilities offered by systems not only channel users' immediate use of them towards those uses intended by the designers, but in the longer term can have a strong influence on the attitudes of users towards the social norms embedded (one might even say embodied) in those systems, on issues such as anonymity/pseudonymity/'real' names, the boundary between public and private information, the boundary between work and personal life and social circles, and the use to which data collected by large organisations can be put.

2 Theories from Psychology

The concept of post-decision bias, that after we make a decision our values change to reinforce the quality of that decision, was demonstrated by Brehm.[11] This initial experiment involved asking participants to rate the relative qualities of a variety of consumer goods. In return for taking part they were allowed to choose one of the items as a free gift, and were then asked to repeat their evaluation of the goods. Subjects consistently changed their evaluations to give higher weightings to the positive aspects and lower weightings to the negative aspects of the chosen item, while deprecating the virtues and enhancing the failings of the items not chosen. This alteration of values applies to more than just relatively free choices. Aronson and Carlsmith[12] showed that children threatened with mild punishment for playing with a highly attractive toy while adult supervision was absent deprecated the value of the toy quite strongly, whereas those threatened with strong punishment tended to exhibit no or only limited deprecation of the toy's attractions. This is interpreted to indicate that when we are aware of strong external factors influencing a decision (particularly one which is against our immediate desires) we can maintain our attitudes; faced with low-level but still sufficiently compelling external factors, we must instead change our attitudes so that the external factors become sufficient to justify our choices.

In addition to adjusting our value sets to reinforce our self-image as good decision makers, human beings are also subject to strong peer pressure in both how we act and how we think. Strong examples of how our role as social animals pressures us to go along with a group opinion, even where we are quite convinced that the group is wrong, were demonstrated by Asch's[13] experiments, and a meta-analysis forty years later provides very strong evidence for this psychological process.[14] While in Asch's experiments there is clearly a right and a wrong answer (the question asked is mathematical), the influence of group behaviour on social norms, where 'right' and 'wrong' are to a great extent socially defined (albeit within a personal ethical framework), can be even greater.

We are also influenced by information presented to us immediately before making a decision, a principle called 'priming' in social psychology. Priming can be a significant problem for psychological and social science research, in that the specific mention of a topic (privacy, for example) can raise an interview participant's or experiment subject's awareness of that value over normally competing values. Priming affects not only values but physical behaviour. Merely creating the circumstances in which certain stereotypes are 'in mind' causes those thus exposed to subconsciously imitate the stereotype, to the extent that brief exposure to words associated with 'the elderly' (without any explicit mention of elderliness) causes subjects to walk more slowly after the experiment.[15]

It is not only our current values and future decision-making that are affected by forces such as post-decision bias; our memory of decisions made in the past, and of the reasons we had for those decisions, may also be changed as a result of later pressures. Ross and Conway[16] provide a well-argued case that our recollections of our own past actions, and even more so of our attitudes and the reasons for our actions, are subject to significant shifts so as to maintain an illusion of continuity between the past self and the current one, such that changes in attitude are glossed over and minimised in memory.

3 Post-Decision Bias and Pseudonyms/Real Names in SNS

In previous work,[17] we reported a difference of opinion between Japanese and UK university students in some of their attitudes to SNS usage. In 2008, Japanese students, almost all users of the Mixi system, extolled the virtues of pseudonymity online, calling it stupid to reveal personally identifying information such as one's university, department and degree course. Their attitude was that everyone to whom one is connected should already know these things anyway, so why post them online and take risks? UK students who used Facebook were more positive about connecting with people beyond their current in-person social set (for example, those with whom they went to school but who went to other universities/colleges or left education). They felt that the real name policy on Facebook helped them have confidence about who they were connecting with. In 2011, in a so-far unpublished follow-up study, similar interviews with a new cohort of Japanese students[18] revealed attitudes towards the real name policy similar to those of their 2008 UK counterparts, in contrast to the 2008 Japanese cohort. While this could be a generational shift in Japan, that seems unlikely; the more likely explanation is post-decision bias: once one has made the decision to use Facebook, one's obedience to Facebook's authority on privacy issues drives one towards real names and away from pseudonyms. This is to the detriment of users' privacy, since basic details of profiles are now always visible to everyone on Facebook via a search for first or last name.[19] It also reduces their security, since users appear to place inherent trust in the Facebook real name policy, which actually has very limited enforcement facilities, leaving them open to spoofed friend requests.

4 Privacy by Design and Default

boyd and Hargittai[20] challenged the idea that young Facebook users do not care about their privacy, demonstrating by a year-to-year comparison amongst a single cohort of US 18- and 19-year-olds that many of them changed their Facebook privacy settings between 2009 and 2010, in general changing them to be less permissive. Stutzman, Gross and Acquisti[21] showed that over a much longer period a single cohort of Facebook users from Carnegie Mellon University (CMU) had a highly significant tendency to reduce their sharing, except in 2010 when Facebook made a significant number of changes to its privacy settings interface, including minor redefinitions of previous sharing options and a bundle of new sharing options, all of which were set to share rather than not share. This was the case even for settings where any reasonable interpretation of a previous setting not to share would be that the user, if aware of the change, would retain the 'not sharing' setting. Since 2010, the CMU cohort has resumed its trend towards less sharing and more privacy, albeit from a higher base. As Mackay showed,[22] and as Facebook and other companies have demonstrated in the context of privacy time and time again, default settings matter a great deal. Many users never change defaults, and even those who do tend to change only some of them.
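
The force of defaults can be made concrete with a minimal sketch (the option names and values are hypothetical; this models the mechanism described above, not Facebook's actual settings schema):

```python
# A minimal, hypothetical model of a 2010-style settings change: new sharing
# options arrive with permissive defaults, so a user's effective visibility
# expands unless they actively revisit their settings.

DEFAULTS_2009 = {"photos": "friends", "wall_posts": "friends"}
DEFAULTS_2010 = {"photos": "friends", "wall_posts": "friends",
                 "likes": "everyone", "friend_list": "everyone"}  # new options default to share

def effective_settings(user_overrides, platform_defaults):
    """Effective visibility is the platform default for every option, except
    where the user has explicitly set a value; options the user has never
    seen fall through to the (permissive) default."""
    return {option: user_overrides.get(option, default)
            for option, default in platform_defaults.items()}

# A privacy-conscious user who locked down everything available in 2009:
overrides = {"photos": "only_me", "wall_posts": "friends"}

print(effective_settings(overrides, DEFAULTS_2009))
# {'photos': 'only_me', 'wall_posts': 'friends'}
print(effective_settings(overrides, DEFAULTS_2010))
# {'photos': 'only_me', 'wall_posts': 'friends',
#  'likes': 'everyone', 'friend_list': 'everyone'}
```

The user's earlier choices survive, but every category they were never asked about is shared by default, which is the pattern Stutzman, Gross and Acquisti observed around the 2010 interface changes.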

If privacy is to have real meaning for the majority of users, the default settings for sharing must be limited to a close set of people. The clear trend for defaults on Facebook, however, is for more and more information from users' profiles to be more and more visible, as is graphically demonstrated by McKeon.[23] In addition, it should be completely obvious to users whether the information they are seeing about others is 'for their eyes only' or publicly available. While it is impossible to both allow and deny access to information at the same time (which is precisely what any technological anti-copying system would need to achieve), clear information provided to users on the originator's intention as to who should be able to see information should be sufficient, as long as it matches social norms, to encourage significant observation of the restrictions by those with legitimate access. Randi Zuckerberg, the former head of Marketing for Facebook and sister of its inventor, chairman and CEO (Mark Zuckerberg), publicly expressed dismay that someone might take a photo they could see on Facebook and re-post it to Twitter without asking permission.[24] She claimed this was not about privacy or privacy settings but about 'human decency'. This claim attracted a fair amount of derision, particularly since the person who had re-posted the photo more publicly had done so honestly believing that she had seen it in a publicly posted place and not in a private space. When even former executives of a company running a social network such as Facebook are unhappy with its privacy effects, it is hard not to conclude that the system includes neither privacy by design nor privacy by default.

5 User Incentives and Intentions

Users primarily join social media platforms to be social with others. Without the desire to interact with others through that medium, they would probably not join. This seems obvious, but in fact there are other reasons that people join such sites. The novelist Charles Stross, for example, has registered as a Facebook user to pre-empt impersonation on the site but states that he finds his other online presences sufficient for his private and public social interaction needs and hence merely maintains the Facebook page with links to his preferred systems.[25]

Others may register for a site in order to 'lurk' or to try to protect their privacy. For example, Facebook provides a service which allows users to link photos they post with the people shown in them, or even to add name links to photos posted by third parties (ie by neither the person providing the name link nor the person being named). Photos can of course include associated information with links to external sites, or even just text data giving names for the people (supposedly) in the photo. If one is a Facebook member, the easy technical solution is to use a Facebook 'tag': a link to someone's Facebook profile. The benefit to the person thus named is that Facebook's privacy settings allow such tags to be automatically rejected, to be kept invisible until approved by the named person, or to be easily viewed by them. Given that this facility exists, bypassing it by using other approaches to name someone in a photo could easily be seen as rude. When the person to be named is not on Facebook, however, their willingness or otherwise to be associated with photos in general, or with a particular photo, is less easily discerned. Thus people who desire significant levels of privacy may join Facebook in order to control their profile as much as possible, losing some privacy by signing up to Facebook's terms and conditions (which include using one's real name)[26] in the kind of privacy-privacy trade-off considered by Henne and Smith.[27]
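
The control point this tagging affordance creates, and its absence for non-members, can be sketched as follows (a simplified, hypothetical model; the class and function names are illustrative, not Facebook's implementation):

```python
from dataclasses import dataclass, field

@dataclass
class Member:
    name: str
    review_tags: bool = True                      # privacy setting: hold tags for approval
    pending: list = field(default_factory=list)   # tags invisible until approved
    visible: list = field(default_factory=list)   # tags shown on the profile

def tag_photo(photo_id, person):
    """Tag a person in a photo. A member gets the review affordance; a bare
    text name for a non-member offers no control point at all."""
    if isinstance(person, Member):
        target = person.pending if person.review_tags else person.visible
        target.append(photo_id)
    # else: a non-member named in free text receives no notification and
    # has no approval step available.

def approve_tag(member, photo_id):
    """The named member, not the tagger, decides whether the tag appears."""
    member.pending.remove(photo_id)
    member.visible.append(photo_id)

alice = Member("Alice")
tag_photo("photo-42", alice)                     # held as pending, invisible to others
approve_tag(alice, "photo-42")                   # now visible, by Alice's choice
tag_photo("photo-43", "Bob (not on Facebook)")   # Bob never learns of it
```

The design point is that the approval step exists only inside the platform, which is exactly why privacy-seeking non-members may feel compelled to join.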

The vast majority of users do, however, join social networks in order to interact with others on the network. Some do so willingly and enthusiastically, extolling the virtues of the system to everyone they know. Others do so reluctantly,[28] particularly some teens, either because online social networks are their only opportunity to relate to their peers due to physical movement restrictions, or because engagement in the parallel online social network is a necessity for maintaining membership of the real-world network: so much information is passed purely via electronic exchange that lack of engagement online leads to isolation offline as well.

As shown by boyd and Hargittai,[29] and by Stutzman, Gross and Acquisti,[30] however, as people become more familiar with social networks they tend to become more careful about their privacy settings. These settings, though, tend to focus on the visible and directly apparent effects of visibility. As demonstrated by King, Lampinen and Smolen,[31] many users have only a limited understanding of issues such as the use of data by the platform operator and by third parties such as app developers (on the Facebook platform as well as on the various smartphone operating systems). This is consistent with the general conception of privacy in Japan described by Adams, Murata and Orito,[32] where the biggest privacy threats are seen to come from those with whom one has regular dealings but who are not inside a close sphere of trust. Just as general concerns in Japan about the growth of information processing by third parties, and its impact on people's lives, led to a relatively swift shift in attitudes and the adoption of (unfortunately very weak) data protection laws,[33] so too have concerns about app developers generated disquiet but only limited action by users.[34]

6 Conclusions

6.1 The psychological implications of SNS’ affordances

Lewis, Kaufman and Christakis[35] showed that reductions in sharing via Facebook settings followed a group-influence pattern, while boyd and Hargittai,[36] and Stutzman, Gross and Acquisti,[37] showed a trend of users becoming more restrictive over time in what they shared and with whom. On the other hand, Stutzman, Gross and Acquisti[38] also demonstrated the effects that platform defaults have, particularly in increasing sharing beyond the desires of the users. Taken together, these and similar studies show the multiplicity of influences on users' behaviour with regard to personal information (their own and that of known others and strangers). These influences include a strong element of the code of Lessig[39]/the protocol of Galloway,[40] ie the decisions made (deliberately or as unintended consequences) by platform creators. However, they are also strongly influenced by social norms, as again suggested by Lessig[41] and as we might expect from the psychological theories of Asch.[42]

boyd and Hargittai,[43] and Stutzman, Gross and Acquisti,[44] showed that users care about their privacy; however, as Galloway[45] puts it, the 'Interface Effect' undermines the agency of the user by defining the paths they can follow and/or the paths that are most easily followed. Kahneman[46] provides a detailed explanation of various ways in which people make poor decisions based on accurate information presented in circumstances in which an inappropriate type of thinking is applied to the problem at hand. Bastiat[47] wrote, over a century and a half ago, about similarly flawed thinking, explaining poor economic evaluations by distinguishing between 'What is Seen and What is Not Seen'. In their usage of social networking systems, users primarily see the direct benefits of social interaction with their peers. The longer-term consequences of over-sharing information, through a variety of mechanisms (the use of the information to create a profile by the platform operator, the transitive passing on of private information by one's contacts to third parties and further, one's visible Internet profile seen by potential future employers), are generally not seen, and are either ignored or discounted by most users until and unless the potential negative consequences actually happen to them. By then the damage may have been done, and be as irreparable as the economic loss of Bastiat's broken window.[48]

6.2 Regulatory options

The regulation of privacy is a significant topic of debate, with the EU, at the time of writing, negotiating the details of replacement data protection legislation. Meanwhile, countries such as South Korea have already implemented new principles.[49] These include the requirement to unbundle permission for the processing of data necessary to the interaction between subject and processor from permission for data the processor would like to have but which is not necessary for the purpose at hand, and the rule that services may not be offered on the basis of carte blanche permission to process all data held on the system. There are those who argue that strong regulation impedes innovation in the Internet economy,[50] that without the ability to marketise all data held, companies like Facebook would not exist, and that users would be all the poorer for that. We can see from both psychological theory and empirical data, however, that users are not making informed-consent choices regarding their use of online systems. As such, it is the duty of governments to provide a fairer playing field. They should not seek to prevent innovative uses of personal data, nor to prevent users from sharing should they so wish, but as data protection law evolves they should seek to provide users with the power to choose, defaults for safety, and penalties for blatant disregard of users' rights, particularly where users are most vulnerable to manipulation. Regulators should, in particular, be wary of arguments that users' actions and responses to surveys provide clear guidance to users' desires and self-interests. Given different circumstances, users value different things, and the job of legislators and regulators should be to ensure that users' interests are properly represented by system design and defaults.
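
The unbundling principle can be expressed as a short sketch (the purpose names are hypothetical; this models the rule as described above, not the text of the Korean statute):

```python
# A hypothetical model of unbundled consent: purposes necessary to the
# service are separated from those the processor merely wants, optional
# purposes are individually grantable or refusable, and the service may
# not be conditioned on blanket permission for everything.

NECESSARY = {"account_management", "message_delivery"}
OPTIONAL = {"ad_profiling", "third_party_sharing"}

def may_offer_service(granted):
    """The service may run once the necessary purposes are consented to;
    it may NOT additionally demand the optional ones (no carte blanche)."""
    return NECESSARY <= set(granted)

def may_process(purpose, granted):
    """Each act of processing needs a recognised purpose the user granted."""
    return purpose in (NECESSARY | OPTIONAL) and purpose in granted

granted = NECESSARY | {"ad_profiling"}       # user declined third_party_sharing
assert may_offer_service(granted)            # the service must still be offered
assert may_process("ad_profiling", granted)
assert not may_process("third_party_sharing", granted)
```

The design choice embodied here is that refusing an optional purpose never gates access to the service itself, which is precisely what a carte blanche terms-of-service model forecloses.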


[*] Centre for Business Information Ethics, Meiji University, Tokyo, Japan. <aaa@meiji.ac.jp>.

[1] G Gates, ‘Facebook Privacy: A Bewildering Tangle of Options’, New York Times (online), 12 May 2010 <www.nytimes.com/interactive/2010/05/12/business/facebook-privacy.html>; B Johnson, ‘Privacy no longer a social norm, says Facebook founder’, The Guardian (online), 11 January 2010 <www.guardian.co.uk/technology/2010/jan/11/facebook-privacy>.

[2] d boyd and E Hargittai, ‘Facebook Privacy Settings: Who Cares?’ (2010) 15(8) First Monday, 2 <firstmonday.org/article/viewArticle/3086/2589>; L Andrews, I Know Who You Are and I Saw What You Did (Free Press, 2011).

[3] A term from ergonomics and human computer interaction meaning one of: the actions possible with an interface; the actions that appear to be and are possible with an interface; the actions that are suggested by and are possible with an interface. See D A Norman, The Design of Everyday Things (Basic Books, 2002).

[4] N Wiener, The Human Use of Human Beings (DaCapo, 1954).

[5] T W Bynum, ‘Norbert Wiener’s Vision: The Impact of “the Automatic Age” on Our Moral Lives’ in Robert J Cavalier (ed), The Impact of the Internet on Our Moral Lives (SUNY Press, 2005) 11-25.

[6] J Meyrowitz, No Sense of Place (Oxford University Press, 1985).

[7] M McLuhan, Understanding Media: The Extensions of Man (reprinted by MIT Press, 1994).

[8] L Lessig, Code and Other Laws of Cyberspace (Basic Books, 1999).

[9] A R Galloway, Protocol: How Control Exists After Decentralisation (MIT Press, 2006).

[10] A R Galloway, The Interface Effect (Polity, 2012).

[11] J W Brehm, Post-decision Changes in the Desirability of Alternatives (PhD thesis, University of Minnesota, 1955).

[12] E Aronson and J M Carlsmith, ‘Effect of the severity of threat on the devaluation of forbidden behavior’ (1963) 66(6) Journal of Abnormal and Social Psychology 584.

[13] S E Asch, ‘Effects of group pressure upon the modification and distortion of judgments’ in H Guetzkow (ed), Groups, Leadership, and Men (Carnegie Press, 1951) 222-236.

[14] R Bond and P B Smith, ‘Culture and conformity: A meta-analysis of studies using Asch’s line judgment task’ (1996) 119(1) Psychological Bulletin 111.

[15] J A Bargh, M Chen and L Burrows, ‘Automaticity of social behavior: Direct effects of trait construct and stereotype activation on action’ (1996) 71 Journal of Personality and Social Psychology 230, and other work since.

[16] M Ross and M Conway, ‘Remembering One’s Own Past’ in R M Sorrentino, E T Higgins (eds), Handbook of Motivation and Cognition: Foundations of Social Behavior (Guilford Press, 1986) vol 1, 122-144.

[17] A A Adams et al, ‘Emerging Social Norms in the UK and Japan on Privacy and Revelation in SNS’ (2011) 16 International Review of Information Ethics 18 <www.i-r-i-e.net/inhalt/016/adams-etal.pdf>.

[18] Most of whom now use Facebook, which is rapidly overtaking Mixi in both number of users and activity in Japan.

[19] C Preimesberger, Facebook to Remove Search-by-Name Opt-Out Function (20 October 2013) eWeek <http://www.eweek.com/cloud/facebook-to-remove-search-by-name-opt-out-function.html>.

[20] boyd and Hargittai, above n 2, 2.

[21] F Stutzman, R Gross and A Acquisti, ‘Silent Listeners: The Evolution of Privacy and Disclosure on Facebook’ (2013) 4(2) Journal of Privacy and Confidentiality 7.

[22] W E Mackay, ‘Triggers and barriers to customizing software’ in S P Robertson and G M Olson (eds), CHI '91 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (ACM, 1991) 153, doi: 10.1145/108844.108867 <http://doi.acm.org/10.1145/108844.108867>.

[23] Matt McKeon, The Evolution of Privacy on Facebook (2010) <http://mattmckeon.com/facebook-privacy>: an animated information graphic showing the expansion of the default visibility of Facebook profile information from 2005 to 2010.

[24] C Matyszczyk, Randi Zuckerberg loses control on Facebook (and Twitter) (2012) CNET <http://news.cnet.com/8301-17852_3-57560888-71>.

[25] C Stross, ‘Facebook: a reminder’ on Charlie’s Diary (23 May 2009) <http://www.antipope.org/charlie/blog-static/2009/05/facebook-a-reminder.html> .

[26] What a real name actually consists of is a more difficult proposition than it might appear at first glance and is too complex to go into within this context.

[27] B Henne and M Smith, ‘Awareness about photos on the Web and how privacy-privacy-tradeoffs could help’ in A A Adams, M Brenner and M Smith (eds), Financial Cryptography and Data Security, FC 2013 Workshops, USEC and WAHC 2013, Revised Selected Papers, Okinawa, Japan (Springer-Verlag, 1 April 2013) 131-148.

[28] d boyd, Taken Out of Context: American Teens' Sociality in Networked Publics (PhD thesis, University of California, Berkeley, 2008).

[29] boyd and Hargittai, above n 2.

[30] F Stutzman, R Gross and A Acquisti, above n 21.

[31] J King, A Lampinen and A Smolen, ‘Privacy: Is There An App for That?’ in L F Cranor (ed), SOUPS '11 Proceedings of the Seventh Symposium on Usable Privacy and Security (ACM, 2011) 12-20, doi: 10.1145/2078827.2078843 <http://doi.acm.org/10.1145/2078827.2078843>.

[32] A A Adams, K Murata and Y Orito, ‘The Japanese Sense of Information Privacy’ (2009) 24(4) AI & Society 327.

[33] A A Adams, K Murata and Y Orito, ‘The Development of Japanese Data Protection’ (2010) 2(2) Policy and Internet 95.

[34] King, Lampinen and Smolen, above n 31.

[35] K Lewis, J Kaufman and N Christakis, ‘The Taste for Privacy: An Analysis of College Student Privacy Settings in an Online Social Network’ (2008) 14(1) Journal of Computer-Mediated Communication 79, doi: 10.1111/j.1083-6101.2008.01432.x <http://dx.doi.org/10.1111/j.1083-6101.2008.01432.x> .

[36] boyd and Hargittai, above n 2.

[37] F Stutzman, R Gross and A Acquisti, above n 21.

[38] Ibid.

[39] Lessig, above n 8.

[40] Galloway, above n 9.

[41] Lessig, above n 8.

[42] Asch, above n 13; Bond and Smith, above n 14.

[43] boyd and Hargittai, above n 2.

[44] F Stutzman, R Gross and A Acquisti, above n 21.

[45] Galloway, above n 10.

[46] D Kahneman, Thinking, Fast and Slow (Farrar, Straus and Giroux, 2011).

[47] F Bastiat, ‘What is Seen and What is Not Seen’ in George B de Huszar (ed), Selected Essays on Political Economy (first published Van Nostrand, 1850, The Foundation for Economic Education 1995 edn) <www.econlib.org/library/Bastiat/basEss1.html>.

[48] Ibid.

[49] G Greenleaf, ‘Sheherezade and the 101 Data Privacy Laws: Origins, Significance and Global Trajectories’ (2014) 23(1) Journal of Law, Information and Science.

[50] L Clark, ICO Commissioner slams EU data protection directive (2013) Wired.co.uk <www.wired.co.uk/news/archive/2013-02/07/ico-against-eu-data-protection>.

