Reflexivity: Why We Must Choose to Shape, and Not Be Shaped By, Technology


In her new book, Consent of the Networked, Rebecca MacKinnon offers a reality check: "We have a problem," she writes. "We understand how power works in the physical world, but we do not yet have a clear understanding of how power works in the digital realm." In fact, we probably don't even think about power when we update our statuses on Twitter, connect with old school friends and upload pictures on Facebook, buy a book based on a recommendation from Amazon, or use Mail, Docs, Plus, Maps or search on Google.

Software -- from computer games to web services, from WikiLeaks to Amazon to Match.com to Google to Facebook -- is suffused with the principles decreed by the context in which it is produced. It is not, as Prof Melvin Kranzberg would say, "neutral." These spaces, places and technologies permit and discourage certain kinds of uses, as Abi Sellen and Richard Harper argue.

The architecture of the internet and the designs of the web systems we use are the scaffolding upon which the people in charge of the new world -- the Larry Pages, the Sergey Brins, the Mark Zuckerbergs in the West; the creators of RenRen, Mixi, Sina Weibo, Baidu and others in the East; the architects of M-Pesa in Africa -- set the status quo for how we serve our basic human values. Their agendas -- which they may or may not be aware of -- shape information and how we make sense of it.

In this post, I won't argue a technological determinist point of view, but will proffer one reading of the systems many of us use in our personal and work lives. I hope to inspire reflexivity in web users, technology implementers and web service developers. I raise concerns about "techno-fundamentalism," Siva Vaidhyanathan's description of people who blindly accept what they are delivered -- invisible risks and all -- rather than thinking of how it could be otherwise.

In order to avoid blindly falling into a techno-fundamentalist trap, we need to understand how the human is constructed within the tools we use -- based upon the design decisions that come from the sum of the social experiences of the developers: a tenet of my philosophy as a social psychologist. One way is to look at the ways human needs are fulfilled by services like Google and Facebook -- systems that help not only us, but much of the non-Western world as well, to make sense of the vast ocean of information that's online. How are human phenomena like trust, identity, privacy, freedom, power, relevance, value and discovery embedded in the software designs?

Here, I’ll consider two of these human needs -- relevance and identity -- and how the two most powerful entities in our web space construct them.

GOOGLE

The human value Google delivers is "relevance": it tells us which information will best serve our needs. To that end, Google has become a de facto window to the world of knowledge.

Google's mission states their business case: "to organize the world's information and make it universally accessible and useful."

And here's the most important bit of this statement: "make it ... useful." After all, their ability to deliver something useful that solved your problem is what made them market leaders.

How Google judges "usefulness" is programmed into their algorithm, as Ken Auletta outlines in his book Googled: information is considered "more useful" the more often it's been linked to elsewhere on the web, the more times it's been clicked on by users who've searched for similar information, and the more "reliable" the source it comes from.

Although this system is automated to save time and deliver results as fast as possible, each of these core components carries implicit human value judgements -- decisions that someone, at some point, had to make in order to program them in.
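To make this concrete, here is a deliberately toy sketch of how such a score might be composed. Every weight, threshold and domain name in it is invented for illustration; Google's actual algorithm is vastly more complex and largely secret.

```python
# A toy "usefulness" score built from the three signals Auletta describes:
# inbound links, clicks by users with similar searches, and source "reliability".
# All weights and the reliability list are invented for illustration.

RELIABLE_SOURCES = {"nytimes.com", "bbc.co.uk"}  # someone had to choose this list

def usefulness(inbound_links: int, clicks_from_similar_searches: int, domain: str) -> float:
    # Someone decided what a link is "worth" relative to a click...
    score = 0.5 * inbound_links + 0.3 * clicks_from_similar_searches
    # ...and how much extra weight a "reliable" source deserves.
    if domain in RELIABLE_SOURCES:
        score *= 2.0
    return score

print(usefulness(120, 300, "nytimes.com"))         # 300.0
print(usefulness(120, 300, "janeblogger.example")) # 150.0
```

Each constant is a value judgement frozen into code; change one number and a different web rises to the top.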

First, treating the number of links a site attracts as collective intelligence rests on the philosophy that the crowd is always wise. Indeed it can be, but crowd behavior can also result in widespread conformity, like the many instances documented in Charles Mackay's 1841 book Extraordinary Popular Delusions and the Madness of Crowds. Conceptual contagions and hysterias like the South Sea Bubble and Tulipmania that Mackay describes have modern analogues: the housing (and web) bubbles of the last two decades and -- one from my own youth -- the Cabbage Patch Kid craze. Conformity for the sake of it isn't necessarily a good thing.

Second, the infinite loop of feedback from other Google users based on a metric of "similarity" supposes that the system knows who you think you are based only on your actions (a problematic metric), and that it can judge whether you are "similar" to someone else. This gets to the heart of a very topical issue, as Google has recently updated its privacy policy.

Your "similarity" to another person is based on the enormous cloud of information that the service has collected on you, across all of its services, and across all of the sites to which it provides services. And so you are reduced to a bin of actions that can be categorized -- for simplicity -- and cross-referenced -- for similarity.
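A minimal sketch of what that reduction might look like in code, assuming invented action categories and counts: two people become two vectors of tallies, and "similarity" becomes a single number between them.

```python
from math import sqrt

# Each user is reduced to a "bin of actions": counts per (invented) category.
def cosine_similarity(a: dict[str, int], b: dict[str, int]) -> float:
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

you = {"searches:travel": 12, "clicks:news": 40, "watches:cooking": 3}
someone = {"searches:travel": 9, "clicks:news": 25, "watches:sport": 7}
print(cosine_similarity(you, someone))  # two lives, compressed into one number
```

Everything the vectors leave out -- context, intention, who you think you are -- simply does not exist as far as the comparison is concerned.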

The value judgements implicit in this process include the categories into which Google chooses to lump you, the keywords it looks for and, perhaps most significantly, the openness of personal information that is assumed when you use their services.

As an aside, you can see what Google thinks you are. The system might get it wrong. For example, a month ago, Google thought I was male.

Finally, the qualitative ranking given to sources of information -- like whether it lives on The New York Times or Jane Blogger's Home Page -- carries an enormous value judgement about what is "reliable", and imports into the online world the power structures of offline information hierarchies. This is where Google's political and commercial allegiances become very important. It is no secret that in 2010 they were willing to play ball with the U.S. government's National Security Agency, yet unwilling to continue their relationship with the Chinese government, despite a previous four years of togetherness.

And looking at how the company plans to evolve in the future -- by delivering "serendipity" -- exposes even further how the company constructs the human. I've discussed that in another post on this blog, and don't need to repeat myself here.

My question is whether you believe in the construction of humanity delivered by Google, and whether they succeed in providing a solution to the human need of "relevance" in a way that chimes with your own beliefs.

FACEBOOK

Facebook also provides relevance, but it does so in a different way to Google: rather than place the emphasis on a service to deliver relevant results, Facebook is a platform for relevance to be derived through social ties. It delivers discovery in a way that is vetted by the network of people whom you trust enough to call "friends.” And to do this, it makes some assumptions about personal identity and human relationships.

Facebook was developed within a social context in which identifiability is paramount. I live in a culture obsessed with hallmarks of trustworthiness; reading headlines about the web makes it clear that there's a lot of fear about what harm can come to us from anonymous others. Yet other hugely successful social networks that have been developed around the world -- like Japan's number one service Mixi or the mobile phone social networks in India like RocketTalk and Mig33 -- are anonymous. So tying offline and online identity together is not the only successful solution to serve the human need of connection.

Facebook also constructs interpersonal relationships in a particular way: it demands that they be reciprocated; it demands -- through the newsfeed -- that they be open; and it constructs them as equal, regardless of whether you met someone at a conference or in a bar, or you've been lifelong friends. All of this implies that you have one identity, and one that is relatively static, because it can be perpetually referenced.
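As a sketch of how such design choices become data structures, consider the simplest structure consistent with those demands: an undirected, unweighted graph. This is purely illustrative -- Facebook's actual internals are not public in this form -- but it shows how reciprocity and equality get baked in.

```python
# An undirected, unweighted friendship graph. Undirected edges encode the
# demand for reciprocity; the absence of edge weights encodes the assumption
# that all ties are equal. Illustrative only.

friends: dict[str, set[str]] = {}

def add_friendship(a: str, b: str) -> None:
    # Both directions are written at once: a tie cannot be one-sided.
    friends.setdefault(a, set()).add(b)
    friends.setdefault(b, set()).add(a)

add_friendship("you", "lifelong_friend")
add_friendship("you", "conference_acquaintance")

# From the graph's point of view, the two edges are indistinguishable.
print(friends["you"])
```

A design that allowed one-way or weighted ties would need a different structure entirely; the choice of this one is the construction.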

All of these constructions of human social relationships can be directly traced back to Mark Zuckerberg, Facebook's benevolent dictator, who has openly said that we have one identity, and that privacy is dead.

Additionally, the mechanics Facebook provides for defining your online self under-represent the messiness of human identity. Writer Jaron Lanier describes this as "self-reductionism": the boxes we tick in our Facebook profiles construct us -- for the purposes of database categorization -- as a collection of self-selected keywords that happen to be particularly useful for search engines and advertisers.

The computer is less likely to bump us up against something or someone that doesn't match one of the categories we've selected, which means we increasingly rely on our existing networks to fulfill our need for connection. Facebook is thus inherently inward-looking. One undergraduate I spoke with recently commented that, if you want to meet new people, you go to a dating site. What’s happening to the online communities of practice so celebrated by theorists like Barry Wellman?

Facebook, the dominant "social" network on our planet, is about reinforcing rather than forging bonds. Does it deliver a construction of "identity" and "connection" that matches your own?

CONCLUSION

Systems like Google and Facebook -- which arose and are successful because they service human needs -- are affecting what new information and new people we are exposed to.

At last year's Decade in Internet Time event at the Oxford Internet Institute, Laura DeNardis argued that "designs of technology are arrangements of power." We can be more critically engaged and empowered if we learn more about the negotiations of power between the corporations trying to dominate this space, the governments trying to control it and the users colonizing it.

How might the web have been different if any of the other information architectures being developed at that time had been successful? This poses an important question about the assumptions we unwittingly make when we use online systems. Rather than asking "What does it do?" when we use a service to fulfill our needs, perhaps we should be asking "What does it mean?"

Banner image credit: Corie Howell http://www.flickr.com/photos/coriehowell/3514141273/