I got to thinking a few days ago as I perused status updates on my Facebook page. Most people were posting inane comments and humorous pictures and videos, but then I noticed one of my friends had posted a link to an Oxfam video on land grabbing in Uganda. I watched it and wanted to acknowledge it, and realised that Facebook is heavily biased against this sort of article spreading as effectively as videos of cats doing cute things. The only option you have is to ‘Like’. Now, I don’t know about you, but I don’t really want to like a video about 22,000-plus people being forced out of their homes and off their land, because, quite frankly, I don’t like it.
This really matters because of the way the algorithms work. On a personal level, the site tracks what you ‘Like’ and serves you similar ‘stories’, so if you don’t click ‘Like’ on statuses you think are important, you won’t see similar posts in future. At the community level, the posts that spread are the ones ‘Liked’ by the most people, so articles that are important but unlikeable are not going to spread.
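As a minimal sketch of that dynamic – assuming a purely like-driven ranker, since Facebook’s actual news feed algorithm is proprietary and far more complex – a feed that scores posts by like count and by your own liking history will bury the unlikeable story every time:

```python
from collections import Counter

def rank_feed(posts, my_like_history):
    """Rank posts by like count, boosted by topics the user has liked before."""
    topic_affinity = Counter(my_like_history)

    def score(post):
        # Global popularity times a personal boost for familiar topics;
        # a post few people 'Like' scores low and is rarely shown.
        return post["likes"] * (1 + topic_affinity[post["topic"]])

    return sorted(posts, key=score, reverse=True)

feed = rank_feed(
    [
        {"id": 1, "topic": "cats", "likes": 500},
        {"id": 2, "topic": "land-grabbing", "likes": 3},
    ],
    my_like_history=["cats", "cats", "travel"],
)
print([p["id"] for p in feed])  # [1, 2] -- the cat video wins every time
```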
Google+ has the right idea with its +1 button – it is neutral, a simple acknowledgement of a post’s importance. Do you think we could get Facebook to adopt something like this and call time on the ‘Like’?
I was pointed in the direction of Eli Pariser’s TED and RSA talks on this exact subject while chatting about it. I made a few notes and thought I’d post them here to outline some of the self-censorship issues we might face if we let websites track and use our browsing information to build a bubble around each individual’s online information provision.
Eli Pariser – from his RSA talk about ‘The Filter Bubble’
His concern is that the Internet is not as genuinely connective as it seems.
1 in 11 humans use Facebook, and 50% of traffic to news websites comes from shared links. When Pariser sought out friends who disagreed with him, he realised they had been edited out of his feed – Facebook saw what he was clicking on and liking and served him links he already agreed with.
WHY is this the case?
Google’s Executive Chairman Eric Schmidt says that from the beginning of history humanity produced about 5 exabytes of data (conversation, books, art etc.) – roughly 62.5 million 80 GB iPods’ worth. Now the same amount of data is poured online EVERY 2 days.
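A quick back-of-the-envelope check of that conversion, assuming decimal units (1 exabyte = 10^9 GB):

```python
EXABYTE_IN_GB = 10**9         # 1 exabyte = a billion gigabytes (decimal units)
total_gb = 5 * EXABYTE_IN_GB  # Schmidt's 5 exabytes
ipod_gb = 80                  # classic 80 GB iPod

print(total_gb / ipod_gb)     # 62500000.0 -> about 62.5 million iPods
```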
Companies have realised that if you can sort through all this data you can make good money.
The notion of ‘if you like this, you’ll like that’ means algorithms look for people whose preferences resemble yours, and for groups of products that are similar to ones you already like.
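A toy sketch of that user-based collaborative filtering – the users, items, and ratings here are all invented for illustration: score each stranger by how closely their past ratings match yours, then recommend whatever the most similar people liked that you haven’t seen.

```python
import math

# Invented ratings for illustration: user -> {item: score out of 5}
ratings = {
    "you":   {"cat_video": 5, "travel_vlog": 4},
    "alice": {"cat_video": 5, "travel_vlog": 4, "dog_video": 5, "oxfam_film": 1},
    "bob":   {"cat_video": 1, "travel_vlog": 1, "oxfam_film": 5, "documentary": 5},
}

def similarity(a, b):
    """Closeness of two users on the items both have rated (1 = identical)."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    distance = math.sqrt(sum((a[i] - b[i]) ** 2 for i in shared))
    return 1 / (1 + distance)

def recommend(user):
    # Weight every other user's ratings by their similarity to `user`,
    # then suggest the unseen item with the highest weighted score.
    scores = {}
    for other, theirs in ratings.items():
        if other == user:
            continue
        w = similarity(ratings[user], theirs)
        for item, score in theirs.items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0) + w * score
    return max(scores, key=scores.get)

print(recommend("you"))  # 'dog_video': people like you liked it
```

Note how the Oxfam-style film barely registers: the people most like ‘you’ rated it low, so it never surfaces.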
There is a scary statistic that the main websites each put an average of 67 cookies and other bits of tracking on your computer when you visit.
Companies don’t even need much data to make these extrapolations. The company Hunch says it can predict your consumer choices correctly from just 5 data points, and even if there is nothing on your own profile, it can extrapolate your preferences from those of two of your friends.
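A toy sketch of that friend-based extrapolation – the interests, scores, and simple averaging rule are my assumptions, not Hunch’s actual model: with nothing on your own profile, averaging two friends’ preference scores already yields a workable guess about you.

```python
# Invented preference scores (0 to 1) for two friends; 'you' have no profile.
friends = {
    "friend_1": {"indie_films": 0.9, "camping": 0.2, "craft_beer": 0.8},
    "friend_2": {"indie_films": 0.7, "camping": 0.4, "craft_beer": 0.6},
}

def extrapolate(friend_profiles):
    """Guess a blank user's preferences by averaging their friends' scores."""
    guess = {}
    for profile in friend_profiles.values():
        for interest, score in profile.items():
            guess[interest] = guess.get(interest, 0) + score / len(friend_profiles)
    return {interest: round(score, 2) for interest, score in guess.items()}

print(extrapolate(friends))
# {'indie_films': 0.8, 'camping': 0.3, 'craft_beer': 0.7}
```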
Everyone now gets their own customised results; as of December 2009 there is no longer a standard Google search.
When two of his friends searched Google for the word ‘Egypt’, they got very different results:
• Person A: Crisis, Protests, Lara Logan
• Person B: Travel, News and CIA World Factbook.
We are drifting apart online, seeing different pictures of the world, all because of the buzzword ‘relevance’: ranking is based on what you click the most, because clicks mean money.
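A crude sketch of what click-driven ‘relevance’ might look like – Google’s real signals are far richer, and this only illustrates the feedback loop: results whose topics you have clicked before float to the top, so each click narrows what you see next.

```python
from collections import Counter

# Hypothetical candidate results for the query 'egypt', tagged by topic.
results = [
    {"title": "Egypt protests intensify", "topic": "news"},
    {"title": "Lara Logan assault coverage", "topic": "news"},
    {"title": "Top 10 Nile cruises", "topic": "travel"},
    {"title": "CIA World Factbook: Egypt", "topic": "reference"},
]

def personalise(results, click_history):
    """Rerank results so topics the user has clicked before rise to the top."""
    affinity = Counter(click_history)
    return sorted(results, key=lambda r: affinity[r["topic"]], reverse=True)

# Person A has been clicking news stories; Person B, travel pages.
print([r["title"] for r in personalise(results, ["news", "news", "politics"])])
print([r["title"] for r in personalise(results, ["travel", "travel", "hotels"])])
```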
You don’t choose what gets into your filter bubble, so you don’t know what is outside it or what you’re missing.
There are three problems with this notion:
1. DISTORTION – if you don’t know what is outside the bubble, you don’t know how distorted your view of the world is. If you are watching a programme with a known bias you can allow for the slant, but in the filter bubble you can’t.
The ‘Like’ button imposes a very particular balance: it is a positive word with no neutral option. There are things you can’t ‘Like’ because of the phrasing, and so they simply disappear.
2. THE PSYCHOLOGICAL PROBLEM OF OBESITY – take Netflix as an example. Some films zip straight through the queue and get sent: the easier watches (blockbusters, trashy escapism). The harder watches (documentaries, Holocaust films, art house, foreign cinema) float about in the queue and never get sent.
Rather than giving us a balanced diet of information, the algorithms that serve it make it easy to drift completely the wrong way. There is an internal tug of war between our immediate present self and our aspirational self, and the algorithms, watching only what we actually click, side with the former.
Traditional information feeding and gatekeepers gave you some Justin Bieber and some Afghanistan, but the filter bubble is focussed on what you are going to click most or next. Information junk food can therefore appear to be the only thing that exists.
3. CONTROL – this is what is at stake. The algorithms limit what we believe to be the options and perspectives available.
It doesn’t make sense to have just one set of results when searching for ‘Obama’ or ‘climate change’, for example, though it does when it’s just a dentist’s phone number or the capital of Switzerland.
We don’t live in a gatekeeper-less society: the gatekeeper is no longer a person but code, deciding what we do and don’t see, with no sense of the embedded ethics that twentieth-century gatekeepers (e.g. newspaper editors) had. That was the era when people realised that informed citizens – people who know about the things they must make decisions on – are essential to a functioning democracy.
We need the new institutions to recognise their responsibility, and we need more control over when filters are at work, more transparency, and embedded ethics. Perhaps an ‘Important’ button as well as a ‘Like’ on Facebook.
On the Filter Bubble website, Eli Pariser has a list of 10 things we can do to make it harder for these algorithms to manipulate our information diet.