Algorithm Accountability: The Incoming Technocratic Age

Fake news, propaganda and questions about the credibility of information continue to cloud the political and cultural developments that are shaping our world. Although fake news has always existed in one form or another, the current hype serves as a red herring that hides the deeper political dysfunction of societies in the post-truth environment.

The primary function of the mainstream media is not to serve or inform the public, nor does the media act as a check on political power. These are merely secondary side effects that occur when the quality of the content is sufficient. The primary goal, however, is less glamorous. Most mainstream media firms are big corporations, some even part of larger conglomerates, and as in any kind of business, their objective is to maximise revenues. Investigative journalism therefore takes second place to the financial needs of the corporation. This holds true for the traditional media as well as the new digital media.

Since 2004, the number of printed newspapers and professional reporters has steadily decreased, whilst the sheer amount of noise on the internet and web-generated news has surged. As advances in technology converged in the media, more people became producers, distributors and editors of news than ever before. This shift in the distribution of news is due to changes in advertising: over the last two decades, ever more effective mechanisms were designed to match commercial messages to consumers, until finally, with the advent of the internet, word-of-mouth advertising was digitised thanks to sophisticated algorithms. Personalised advertising has many benefits. For one, the customer isn’t subjected to abstract messages designed for a large demographic audience. Instead, the algorithm remembers the past behaviour of the user and, based upon that, suggests things the user may want in the future. For instance, Amazon guides its customers to products they may enjoy next based on what they purchased recently. This powerful impact of personalised commercial messages is what makes platforms like Google and Facebook so enticing for advertisers. However, what is good for the marketplace and consumers doesn’t hold the same benefits for the traditional media and the distribution of information. Less advertising has resulted in lower revenues, which has led to cutbacks in resources. Moreover, the algorithms that personalise advertising and narrow down choices are also responsible for the personalisation of news feeds.
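To make the mechanism concrete, below is a minimal sketch of the ‘people who bought X also bought Y’ logic this paragraph describes. The products and purchase histories are invented for illustration, and real systems such as Amazon’s are vastly more elaborate; this shows only the core idea of recommending from overlapping histories.

```python
from collections import Counter

# Hypothetical purchase histories: user -> set of products bought.
purchases = {
    "alice": {"tent", "sleeping bag", "camping stove"},
    "bob":   {"tent", "sleeping bag", "hiking boots"},
    "carol": {"tent", "hiking boots", "water filter"},
}

def recommend(user, histories, top_n=3):
    """Suggest products bought by users whose histories overlap with this user's."""
    own = histories[user]
    scores = Counter()
    for other, items in histories.items():
        if other == user:
            continue
        overlap = len(own & items)    # shared purchases weight the neighbour
        for item in items - own:      # only recommend products the user hasn't bought
            scores[item] += overlap
    return [item for item, _ in scores.most_common(top_n)]

print(recommend("alice", purchases))  # ['hiking boots', 'water filter']
```

The same remembering-and-suggesting pattern, pointed at news stories instead of products, is what produces the personalised feeds discussed below.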

Currently, roughly 60% of Americans get their news from social media platforms like Facebook and Twitter, which offer an almost endless stream of stories and items. The primary purpose of the algorithms that spread the news is to keep the user scrolling for as long as possible. Since people tend to read and share information that supports their pre-existing beliefs, most of these algorithms are built on reinforcement: past engagement determines future content. This is why Google offers different search results to people with differing political ideas, and why Amazon recommends different kinds of books to the liberal and conservative camps. So, in a twist of irony, the tools of mass communication have become so refined that they function as personalised means of transmission. However, news fed by reinforcement-based algorithms isn’t necessarily accurate. Most web-generated news distributors have no incentive to remain impartial or present the truth, as their business model is based on the number of clicks or views.
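A toy version of such an engagement-driven ranker might look like the sketch below. The stories, topics and click history are assumptions made for illustration; production systems use learned models rather than simple counts, but the principle of ranking by predicted engagement is the same.

```python
from collections import Counter

# Hypothetical candidate stories, each tagged with a topic.
stories = [
    ("Tax bill passes senate", "politics"),
    ("New phone released", "tech"),
    ("Party A slams Party B", "politics"),
    ("Quantum chip milestone", "tech"),
]

# Topics the user clicked on previously; reinforcement means past
# engagement directly boosts similar future content.
click_history = ["politics", "politics", "tech"]

def rank_feed(stories, history):
    """Order stories by how often the user engaged with their topic before."""
    engagement = Counter(history)
    return sorted(stories, key=lambda s: engagement[s[1]], reverse=True)

for title, topic in rank_feed(stories, click_history):
    print(f"{topic:9} {title}")
```

Note that nothing in this ranking rewards accuracy; it rewards only whatever the user engaged with before.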

Perhaps the best statement that captures the impact of this is the adage ‘on the internet, nobody knows you’re a dog’.

Moreover, reinforced news also fosters a filtered awareness that promotes a clickbait culture. People who share ideas and interests are clustered tightly together, while vital but unpleasant issues are neglected. The system essentially tells readers what they want to know, not what they need to know. Some societies are more affected than others, but overall, personalised news feeds contribute to the polarisation of the cultural and political landscape, while the number of people who consume genuinely impartial information remains fairly limited. Reinforced news reaches its audience so effectively that it outperforms the mainstream press, despite the latter’s far greater finances and manpower. In this framework, reinforcement-based news has brought yellow journalism back to the forefront, a development bound up with the demise of the mainstream media.
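The feedback loop behind this filtering can be demonstrated with a small simulation: an assumed reader with only a mild preference for one topic, fed by a ranker that boosts whatever was clicked before, ends up seeing that topic almost exclusively. All the numbers here are illustrative assumptions, not measurements of any real platform.

```python
import random

random.seed(0)

topics = ["favoured", "other"]
weights = {"favoured": 1.0, "other": 1.0}      # the ranker starts out neutral
click_rate = {"favoured": 0.7, "other": 0.3}   # assumed mild reader preference

for step in range(1, 501):
    # The ranker shows a story with probability proportional to its weight.
    shown = random.choices(topics, weights=[weights[t] for t in topics])[0]
    # The reader is more likely to click what they already like...
    if random.random() < click_rate[shown]:
        weights[shown] += 1.0   # ...and every click reinforces that topic.
    if step % 250 == 0:
        share = weights["favoured"] / sum(weights.values())
        print(f"after {step} stories, the favoured topic fills {share:.0%} of the feed")
```

A small initial bias, amplified by reinforcement, is enough to collapse a nominally diverse feed into a single cluster.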

In words often attributed to Mark Twain: ‘History may not repeat itself, but it often rhymes’.

The unfortunate truth is that personalised advertising and news are returning the media to the partisan-era circulation of information. Yet, unlike in the past, personalised content is far more refined. Perhaps the defining moment of modern fake news occurred during the US election of 2016, when the Internet Research Agency, a Russian troll farm based in St Petersburg, created 120 false pages on Facebook and posted more than 80,000 messages that reached 29 million Americans. The exposure surged to 150 million after the posts were shared and liked. Whether the Russian exploitation of social media algorithms had a decisive impact on the election results remains unknown, but similar Russian activities have targeted Europe, where the Kremlin seeks to bolster the digital presence of populist and nationalist movements.

In the past, non-Russian sympathisers who were susceptible to Russian propaganda were dubbed ‘useful idiots’, a term commonly attributed to Vladimir Lenin. At the time, the useful idiots were not communists, but they were so firmly convinced that the West was decadent that they were prepared to join whatever opposition was available. Today, a similar pattern is emerging on social media platforms, where Europeans and Americans who disagree with their respective authorities are indirectly exploited by the Kremlin. The Internet Research Agency, for instance, exploits reinforcement-based algorithms to accelerate the spread of fake news, thereby undermining the geopolitical foes of Moscow. In the realm of fake news and information rivalry, the Russians hold one fundamental advantage over the West: English is far more widespread than Russian, so any American retaliation in kind is limited by the language barrier.

From the Russian perspective, interference in the American cultural and political landscape isn’t surprising. Since the 1990s, Washington has used NGOs to shape Russian domestic politics, so for Putin’s government, the current cyber efforts are payback: a cyber version of the divide-and-conquer strategy within the broader rivalry between Moscow and Washington. Nor is it just Russia that runs troll factories; jihadist groups are disproportionately present on social media platforms, and most authoritarian governments have their own state-controlled version of Facebook, through which they monitor civil obedience. Meanwhile, in democratic nations, governments, corporations and institutions use experts, interviews and data to make themselves indispensable to the process of investigative journalism. The point is, every country, organisation or entity has its own means of manipulating the media.

However, what makes the future of fake news truly terrifying are the latest innovations in facial and vocal software. Programs like Lyrebird can forge a vocal library from a recording of someone’s voice. Others venture into facial motion capture, such as Deep Video Portraits, which uses algorithms to digitally superimpose faces onto other bodies. The combination of these facial and vocal innovations grants endless possibilities for mainstream film, but the same software also presents a set of tools for malevolent purposes. Hackers could use these programs to launch sophisticated phishing attacks. For instance, scammers could leave a convincing instruction by voicemail, ordering an employee to open an important email attachment that inserts a trojan horse into the network, or simply instruct the employee to wire funds to a confidential business account.

The criminal exploitation is unsettling, but perhaps the biggest potential of these tools lies in the media. False information is already readily distributed on the internet, but facial and vocal programs could revolutionise the distribution of fake news. Even if the final product has flaws, a significant segment of the populace will still believe it to be authentic. Due to these advances in video, audio and artificial intelligence, in the near future web-generated fake news will be so realistic that we will no longer be able to rely on our senses. Even the mainstream media will be unable to tell the difference between authentic and false footage.

[Figure: a generative adversarial network used to create a deep fake of Vladimir Putin]
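The figure refers to a generative adversarial network (GAN), the architecture behind many deep fakes: a generator learns to produce forgeries while a discriminator learns to flag them, and each improves by competing against the other. The sketch below shows that adversarial loop on toy one-dimensional data using PyTorch; it is a minimal illustration of the principle, not a face-synthesis system, and the network sizes and learning rates are arbitrary choices.

```python
import torch
import torch.nn as nn

# Toy 'real data': samples from a normal distribution the generator must mimic.
def real_batch(n=64):
    return torch.randn(n, 1) * 0.5 + 2.0

gen = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
disc = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
g_opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # 1) Train the discriminator to separate real samples from forgeries.
    real = real_batch()
    fake = gen(torch.randn(64, 8)).detach()
    d_loss = bce(disc(real), torch.ones(64, 1)) + bce(disc(fake), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator to fool the discriminator into a 'real' verdict.
    forged = gen(torch.randn(64, 8))
    g_loss = bce(disc(forged), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

print("forged mean:", gen(torch.randn(1000, 8)).mean().item())  # approaches 2.0
```

Scaled up from one-dimensional numbers to images and audio, this same contest is what lets deep fake software produce forgeries good enough to fool its own detector, and increasingly, us.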

In the 17th century, the French philosopher René Descartes wrote about the perception of truth and how the human senses can deceive. He gave the example of a stick half submerged in water: from a certain angle it appears bent, but if one withdraws the stick, it appears straight, showing that the initial belief was false. Although such optical illusions are easy to recognise, modern programs are pushing human perception to new limits, in which we increasingly cannot rely on our senses to reveal the truth. In this context, the internet is changing from a unifying force for social solidarity into a space of reinforcement-based clusters.

The tech companies, like the partisan newspapers of the past, reject responsibility for the content they publish and promote. When Facebook was criticised for the role it played in peddling fake news during the US election of 2016, Mark Zuckerberg posted a statement denying accountability, stating that the goal of Facebook ‘is to give everybody a voice’. Eventually the surge in fake news may force people to become more sceptical and rational, but the impact of new, emerging technologies makes the future seem uncertain.

That being said, the computer scientist Tim Berners-Lee, who invented the world wide web, expressed a more optimistic view of the future of the internet when he stated that the project is by no means finished: ‘it’s possible to [build] media platforms that show us what we don’t know rather than reflecting what we want to know’.

While deep fakes indeed pose a threat to our privacy and security, this threat is far from insurmountable. When taking action on deep fakes, our legislators should eschew rash solutions that would be detrimental to long-term AI advances and instead pursue policies that address the underlying issues in technology culture and education. By incentivising private industry to act against deep fakes, promoting AI research, and improving both equity and the ethics components of domestic STEM education, we can effectively mitigate deep fakes, strengthen our domestic AI capabilities, and continue to foster the environment of innovation that gave us such frightening and fascinating technology in the first place.



Nevertheless, fake news is going to get far worse before it gets any better.

