#Republic: Divided Democracy in the Age of Social Media, Cass Sunstein, Princeton University Press, 2018
The Attention Merchants: The Epic Scramble to Get Inside Our Heads, Tim Wu, Penguin, 2016
On April 10, Mark Zuckerberg – looking pale and a little tense – sat down to testify before the US Senate for the first time in his 14-year tenure as Facebook’s founder and chief executive. Recent revelations had shown that the UK-based Cambridge Analytica had used illegally obtained Facebook data to run ads supporting Brexit and Trump. It was not the first time Zuckerberg had found himself in hot water. In 2003, as a computer science student at Harvard, Zuckerberg created facemash.com, a site that allowed Harvard students to rank each other’s attractiveness in side-by-side comparisons. The front page read: ‘Were we let in for our looks? No. Will we be judged on them? Yes.’ Facemash quickly received angry letters, complaints, and eventually disciplinary action. Zuckerberg took the site down, and emailed apology letters. He wrote: ‘I hope you understand, this is not how I meant for things to go, and I apologize for any harm done as a result of my neglect to consider how quickly the site would spread and its consequences thereafter.’
How quickly the site would spread and its consequences thereafter. Zuckerberg’s claim to naiveté, which helped him escape serious disciplinary trouble in college, is less potent today. The Cambridge Analytica scandal unleashed a torrent of discontent that had been simmering since news about Facebook’s role in the distribution of ‘fake news’ stories in 2016. This scandal, however, hit closer to home: 87 million users, most of them US citizens (and including this author), had their data (likes, education history, birthday, etc.) scraped and sold to Cambridge Analytica, a political consulting company. The firm – now closed due to mounting legal fees and backlash – leveraged this data to create psychometric profiles of those users and target them with pro-Trump 2016 ads: the same strategy used to encourage people to vote Leave in the Brexit referendum earlier that year.
It is safe to say that we are living in an era of big tech backlash. In the early 2000s, when social media was new, Google had just had its initial public offering, and Twitter didn’t exist, the internet promised connection and, potentially, a levelling of inequality. The Arab Spring of 2010 seemed to point to the role of social media in spreading, not eroding, democracy. Massive open online courses heralded a new future in educational access. But now big tech is a whipping boy for political division. The rush to critique the methods and mottos of the ‘big four’ (Google, Facebook, Amazon, and Apple) is matched only by the once-ubiquitous enthusiasm, a decade ago, for lionising them. Riding the wave of criticism is Tim Wu, an American lawyer whose view on big tech companies is informed by a critical look behind the curtain at the business structure that supports and sustains them. He describes the rise of this new class of business in his 2016 book, The Attention Merchants.
Wu launches his story not in 2001, when Google launched AdWords, or even in 1978, when the world’s first spam e-mail was sent. Instead, he begins in 1833, when a 23-year-old New Yorker named Benjamin Day decided to create a newspaper that he could sell for only a penny. It was a novel idea. Newspapers in the early 19th century sold for approximately six times that amount, and were only read by New York’s rich elite. But Day had a critical epiphany which ultimately catapulted his newspaper, later called the New York Sun, from obscurity to stratospheric success. He realised that with a low, low price and eye-catching tabloid headlines, the Sun could survive and thrive on advertising revenue alone. The reader was not the consumer. The reader was the product.
Day is Wu’s first ‘attention merchant’, and the New York Sun is the first business he identifies which attracted a huge following by offering near-free services, combined with an elephantine amount of advertising. From there, Wu argues, ads made a slow incursion into all the facets of everyday life, picking up media along the way. They found their way from posters in 19th-century Paris to the corridors of Madison Avenue in New York, and from there to radio spots, television commercials, and finally – in the sunny streets of Silicon Valley – to the internet.
This business model is so deeply embedded in the internet that today it would seem bizarre if Google charged its users per search, or if Facebook established a monthly fee for use. We have grown accustomed to our attention – attention to videos, our friends’ profiles, or search engines – serving as a valuable commodity in and of itself. But the attention economy wasn’t always a given. Larry Page and Sergey Brin, the co-founders of Google, initially avoided advertising on the search engine, and floundered in search of a sustainable business model. Page once wrote that ‘advertising funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers.’ He was right: one of Google’s early competitors, a search engine called GoTo.com, sold its results to the highest bidder. (GoTo.com was eventually purchased by Yahoo!, and then sued by the US Federal Trade Commission for unlabelled advertising.) Meanwhile, although Facebook initially recognised the importance of advertising to offset server costs, the early iterations of Facebook ads were simply sidebar notices in the form of ‘flyers’: largely campus-focused suggestions for job applications and sponsored events.
But eventually, both Facebook and Google realised that their ultimate cash cow lay in an enhanced form of targeted advertising. Targeted advertising began in earnest in 1978, when New York University professor Jonathan Robbin started comprehensively categorising the US population. Relying on ZIP codes (US postal codes) and census data, he classified 40 distinct ‘segments’ of Americans, filtering them by age, wealth, and interests. He called his system ‘Potential Ratings in ZIP Markets’, or PRIZM. He even assigned each of his segments amusing names, which still ring true in 2018 America: ‘Bohemian Mix’ for single people living in places like New York City’s Greenwich Village, ‘Shotguns & Pickups’ for rural hunting enthusiasts, and ‘Young Influentials’ for up-and-coming suburban two-paycheck families.
Robbin’s system was considered ingenious at the time. But PRIZM pales in comparison to the laser-focused targeting of modern internet companies. Robbin had only static ZIP codes and census data; Google and Facebook had huge streams of data flowing in real time. Google used its data to develop AdWords, a system that collected its users’ searches and then peppered them with ads related to searches for ‘mortgages’, ‘baldness’, ‘high heels’, and more. (‘Big data knows you’re pregnant!’ a TV station declared in 2014.) And Facebook, with the help of the ‘like’ button and the information that users were already freely providing – their location, education, workplace, interests, and more – began to hand over huge quantities of demographic information to advertisers. Looking for 18- to 25-year-old women educated at Oxford University with an interest in cooking and literature? Facebook could provide them.
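To make the kind of demographic filter described above concrete, here is a toy sketch in Python. All user records, field names, and the matching function are invented for illustration; this is not Facebook’s actual ad-targeting API, merely the logic of narrowing an audience by stacked criteria.

```python
# Toy sketch of demographic ad targeting: filter invented user records
# by age range, gender, education, and interests (hypothetical data).
from dataclasses import dataclass, field

@dataclass
class User:
    age: int
    gender: str
    education: str
    interests: set = field(default_factory=set)

users = [
    User(22, "female", "Oxford University", {"cooking", "literature"}),
    User(34, "male", "MIT", {"hiking"}),
    User(19, "female", "Oxford University", {"cooking"}),
]

def match_audience(users, min_age, max_age, gender, education, interests):
    """Return users who satisfy every criterion the advertiser specifies."""
    return [
        u for u in users
        if min_age <= u.age <= max_age
        and u.gender == gender
        and u.education == education
        and interests <= u.interests  # advertiser interests must all be present
    ]

audience = match_audience(users, 18, 25, "female", "Oxford University",
                          {"cooking", "literature"})
print(len(audience))  # → 1: only the first user matches both interests
```

The point of the sketch is how cheaply the narrowing happens once the data exists: each added attribute is just one more clause in the filter.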
To retain advertisers, and rake in cash, internet companies simply have to hold users’ attention. On the social media side, the attention merchant model is kept afloat by flows of addictive content: videos, photos, articles and memes. Facebook, Instagram, Snapchat, and YouTube use a combination of engineers, psychologists, and designers to figure out the right tools to keep users watching, clicking, and engaging. But unlike Benjamin Day, who wrote his own articles, or television stations, which provide their own programming, these companies do not produce content. They merely host it. As a result, the value of social media is dependent not only on its infrastructure of servers and algorithms but also on the donation of entertaining material by users themselves. Wu calls this a ‘virtual attention plantation’, writing, ‘The public [become] like renters willingly making extensive improvements to their landlords’ property, even as they [are] made to look at advertisements.’ Some users, it’s true, make money through their content. The new generation of social-media influencers rely on platforms like Instagram and YouTube to earn cash and build up a brand. But the rest of us provide our content, and view the ads, for free.
One of the central tenets of capturing attention on the internet is also deceptively simple: show people content that they like. The same techniques and algorithms that target advertising also personalise, and ultimately fragment, the user base. Tech companies are open about their enthusiasm for this type of content filtering. Facebook, after changing the News Feed in 2016, announced: ‘[The News Feed] is subjective, personal, and unique – and defines the spirit of what we hope to achieve.’ Former Google CEO Eric Schmidt told the Wall Street Journal in 2010 that, soon, ‘the technology will be so good it will be very hard for people to watch or consume something that has not in some sense been tailored for them’. This vision of a personalised society, with everyone consuming content selected and filtered by algorithm, may seem utopian to some. But for others this networked system of ‘filter bubbles’ and ‘echo chambers’ is problematic at best, dystopian at worst.
This is the premise of Cass Sunstein’s new book #Republic: Divided Democracy in the Age of Social Media, and although Sunstein shies away from a full-throated critique of Facebook and its practices, he wrings his hands with the best of them over social media’s effects on society. Sunstein, a constitutional law professor at Harvard, has in fact written #Republic a few times before: it was released in 2001 as Republic.com, and then in 2007 as Republic.com 2.0. The new version focuses primarily on social media, but it’s telling that Sunstein’s basic argument has not changed over the past decade-and-a-half of evolving technology. Despite his modern credentials – he co-wrote the popular behavioural economics book Nudge (2008), and served in the Obama administration – Sunstein is traditional, at least in the way that constitutional scholars tend to be traditional. His fear of the internet is rooted in old-school principles about how conversation, and discourse, should operate.
Sunstein argues that the architecture of the internet has diverged from a democratic, deliberative ideal. Social media has ushered in an ‘architecture of choice’, rather than an ‘architecture of serendipity’ where we stumble upon alternative views. What we consume on the internet is overwhelmingly determined by what Facebook, Instagram, Twitter, and YouTube think we will like. Our News Feed is peppered with articles shared by friends of a similar ideological bent. On Twitter, we follow journalists, academics, and celebrities with whom we largely agree. ‘Self-insulation and personalisation are solutions to some genuine problems,’ Sunstein writes, ‘but they also spread falsehoods, and promote polarisation and fragmentation’.
It is the latter two points that worry him the most. Democracies around the world have always existed on tenuous ground, vulnerable to authoritarian capture but also reliant on the dedication and engagement of their peoples. US Supreme Court justice Louis Brandeis wrote that ‘the greatest menace to freedom is an inert people … [and] public discussion is a political duty.’ In a world in which we exist in filter bubbles, a truly public discussion – one that takes place on the sidewalks, in parks, or in other types of ‘public forums’ that Sunstein looks back on nostalgically – cannot take place.
There is evidence to back this up. MIT’s Media Lab found that, during the 2016 election, journalists and Trump supporters existed in quantitatively distinct social worlds on Twitter, with few mutual followers or connections. According to a study by its employees, Facebook’s old News Feed algorithm suppressed liberals’ exposure to conservative content by 8% and conservatives’ exposure to liberal content by 5%. Call it a Balkanisation of public life. Or the ‘architecture of choice’, as Sunstein does.
But is it choice? On Twitter, perhaps it is: you choose whom to follow, and that shapes the feed. But Facebook’s News Feed, and even Google’s search results, are influenced not by toggling established settings but by the plethora of personalised data that the companies collect in order to serve more targeted ads. A 2013 paper found that Google’s search results varied by approximately 11% depending on browsing history, email data, location, and more. Our micro-actions each day influence the content we see thereafter. But we hardly ‘choose’ to expose ourselves only to particular views. It is the architecture of the internet itself that formulates filter bubbles.
And so what Sunstein misses is the context that Wu provides: the commodification of attention drives filter bubbles. Facebook is financially incentivised to keep users on the site maximally engaged, so that it can serve up more advertising ‘impressions’ to profit from. Is it surprising that Facebook tries to show its users what they want to see, thus exploiting users’ well-documented confirmation bias? Or that this targeting model is particularly vulnerable to political propaganda and attack ads? Last month, MIT researchers found that fake stories travelled six times faster than real stories on Twitter – and that fake news was 70% more likely to be retweeted. If Facebook failed to stamp out fake news during the 2016 election, it wasn’t because of a lack of technological ability, but rather a lack of will.
There is no business incentive to stamp out fake content unless a public backlash forces it. Similarly, there is no business incentive to pop filter bubbles and diversify users’ worldviews.
And this, ultimately, is where #Republic falls short. Sunstein proposes his own micro-fixes for the filtering problem: a ‘serendipity’ button on Facebook that would inject randomness into the News Feed; links to alternative views on partisan news websites (the New Yorker suggesting an article from the National Review); and so on. There have been some efforts in this arena already. PolitEcho analyses your News Feed to assess its political bias, and Read Across the Aisle suggests a mix of right and left journalism. But as Sunstein knows, given his background in behavioural economics, defaults matter. Few people will take the time to click on a link to an opposing ideological position; even fewer would opt in to a serendipity button. The internet will remain an architecture not of choice or serendipity, but of advertising, unless there is a change in the underlying business model.
The biggest surprise of the Cambridge Analytica scandal may be that it was a surprise. Most of the public, it seems, had not considered that their data was embedded in an attention economy. The Cambridge Analytica scandal was only a scandal because someone besides Facebook, a researcher named Aleksandr Kogan, scraped and sold the data illegally. Facebook, on the other hand, monetises its users’ data (legally) all the time, in cooperation with political and corporate advertisers alike.
Now, for the first time, that business model is in question. In Zuckerberg’s recent Senate appearance – a four-hour testimony punctuated by awkward pauses and Zuckerberg’s insistent ‘Senator, yes’ and ‘Senator, no’ – senators pressed him on whether the company had ever considered alternative methods of making money.
A senator from Florida asked if the Facebook CEO had thought about the subscription model. Zuckerberg demurred. ‘In general – we believe the ads model is the right one for us, because it aligns with our social mission of trying to connect everyone and bring the world closer together’, he said. (Sunstein would likely disagree with the idea that the algorithmic filtering of ads helps ‘bring the world closer together’.)
But while Facebook may not be actively considering the subscription model, others are. Roger McNamee, an early investor in Google and Facebook, wrote an op-ed for the Washington Post in February, called ‘How to fix Facebook: Make users pay for it’. According to company earnings reports, Facebook raked in an average advertising revenue in 2017 of $82.44 for each user in the US and Canada. (European revenues were around a quarter of that amount.) McNamee suggests a roughly $7 monthly fee for Facebook use in the US and the development of a premium News Feed, with curated newspaper, blog, and video content. Initially, he argues, Facebook could offer a choice between the ad-based model and the subscription model. ‘Customers who remained on the advertising-supported service would still be subject to filter bubbles, addiction and manipulation, but growth in subscriptions would reduce the population of affected people’, he writes.
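McNamee’s $7 figure lines up neatly with the revenue numbers above. A quick back-of-the-envelope check (the arithmetic here is this reviewer’s, using only the figures quoted in the text, not McNamee’s own calculation):

```python
# Back-of-the-envelope check: does a $7/month fee replace Facebook's
# ad revenue per user? Figures taken from the 2017 earnings numbers
# cited in the text; the comparison itself is illustrative.
annual_ad_revenue_per_us_user = 82.44   # 2017 US/Canada average ad revenue per user
suggested_monthly_fee = 7.00            # McNamee's proposed subscription

breakeven_monthly = annual_ad_revenue_per_us_user / 12
print(round(breakeven_monthly, 2))      # → 6.87: just under $7/month

annual_subscription = suggested_monthly_fee * 12
print(annual_subscription)              # → 84.0: slightly above the ad-based figure
```

In other words, a $7 monthly fee would more or less exactly substitute for what an average American user’s attention already earns the company in ads.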
This idea is popular with Tim Wu, who told the Atlantic in 2016 that internet users need to ‘Suck it up and pay’ rather than endure the attention merchant business model forever. A subscription model would, at least, have prevented Facebook from inadvertently allowing the sale of 87 million users’ data. It could even have made the company more attuned to the problems of fake news in the lead-up to the 2016 Brexit vote and the US presidential election.
But it’s not clear, at least to this writer, that the subscription model solves all problems (although it is a more promising suggestion than Sunstein’s ‘serendipity button’). Even with a subscription, attention is still a prized commodity. Netflix and Spotify both offer highly personalised lists and suggestions – yes, serendipity abounds, but only within a curated environment. Under a subscription model, Facebook would still not become an agora for deliberative democracy. Social connections with particular political groups would continue to dominate, siphoning the public into different spheres of communication. Meanwhile, the introduction of a subscription model would likely cause some users to flee to other platforms, others to close their accounts, and still others to simply accept the advertising model because, after all, it’s free. At worst, the web would become segregated: lower-income, more vulnerable communities would continue to be bombarded by ads while the rich could keep their attention, and their data, secure.
At one time, users hoped that regulation would never distract from the cowboy free-for-all that was the early web. In 1996 John Perry Barlow wrote in his ‘A Declaration of the Independence of Cyberspace’: ‘Governments of the Industrial world… You have no moral right to rule us nor do you possess any methods of enforcement we have true reason to fear.’ These days, regulation seems inevitable, and necessary. While the US Senate bickers and quizzes Zuckerberg, the European Union has passed the General Data Protection Regulation (GDPR), which gives users the right to obtain and examine the data companies have collected about them and to delete their data on request (the ‘right to be forgotten’). It also requires companies to disclose data breaches within 72 hours, and to provide clear and legible terms and conditions. It’s an aggressive and groundbreaking law, one that could have far-reaching implications for how internet companies in Europe do business. But, as has been the case since the beginning of tech (Facebook’s own informal motto used to be ‘move fast and break things’), companies try to stay a few steps ahead. In advance of the May 25 implementation date, Facebook quietly changed the terms and conditions for 1.5 billion users in Africa, Asia, and around the world, shifting responsibility for their data from its Ireland office (where it was subject to the EU law) to its US office in California. These users will no longer be protected by the GDPR.
If Zuckerberg’s Congressional hearing was any indication, American lawmakers may not know enough about internet companies to regulate them. Jokes circulated on the internet after the testimony about some of the senators’ stranger questions, which revealed fundamental misunderstandings of Facebook’s business model and the basics of its platform. But some, like Zeynep Tufekci, a scholar of social media, have already suggested primary points for regulation: clear opt-in and opt-out mechanisms, access to all personal data (including any inferences the company’s algorithms have made about that data), and time-limited data utilisation. You should be able to know, she argues, whether companies think you are a Democrat or a Republican, and there should be a firm time limit on the usage of those inferences. ‘The current model of harvesting all data,’ she writes, ‘with virtually no limit on how it is used and for how long, must stop’. Her suggestions sound a lot like the GDPR, and it remains to be seen whether users in the US, Canada, and the rest of the world will be able to take advantage of such protective regulations.
The scary thing about the likes of Facebook and Google is not their uniqueness but their ubiquity. While the big companies do have a global hegemony over the technological world, they also rest on a business model – the attention economy – that is shared by smaller-scale apps, websites, and products. Our attention is a commodity that hundreds of companies, day in and day out, are trying to grasp and maintain for the purposes of advertising. ‘What are the costs,’ Wu asks, ‘to a society of an entire population conditioned to spend so much of their waking lives not in concentration and focus but rather in fragmentary awareness and subject to constant interruption?’ A destabilised democracy, perhaps; a world filled with social and mimetic obsession. But it’s worth remembering that at one point the internet was filled with the promise of levelling existing inequalities, globalising the world, and creating democratic deliberation. A global community, if we could keep it.

SHANNON OSAKA reads an MPhil in Geography at Worcester. She grew up in Silicon Valley and has a strong aversion to AirPods.